What’s New

Date          Change
2026-03-05    Initial publication

Commissioned, Curated and Published by Russ. Researched and written with AI.


Four words. That’s what it came down to. “Analysis of bulk acquired data.” Five words, actually. Either way – a specific phrase tucked into a clause in Anthropic’s contract with the US Department of Defense. The Pentagon wanted it deleted. Dario Amodei refused. And within 48 hours, the Trump administration had designated Anthropic a “supply chain risk to national security,” directed every federal agency to stop using Claude, and OpenAI had stepped in to fill the gap.

This is the moment the AI safety community has been theorising about for years: what happens when a sovereign government decides it wants something different from what a lab’s ethics policy permits? We now have a live case study. And if you’re an engineering leader deploying AI systems, you need to be paying close attention – because the implications extend far beyond one company’s contract dispute.


The Standoff: What Actually Happened

Anthropic secured a $200 million contract with the DoD in 2025, making Claude the first major AI model deployed in US government classified networks. The deal came with Anthropic’s standard acceptable use framework attached – restrictions on using the technology for domestic mass surveillance and for fully autonomous weapons systems (systems that can make lethal decisions without human input).

Those restrictions, it turned out, were not acceptable to everyone in the Pentagon. Defense Secretary Pete Hegseth reportedly derided them as “woke.” The department pushed for the ability to use Claude for “any lawful use” – a formulation that, legally, covers a lot of ground.

Negotiations continued into early 2026. According to a staff memo from Amodei reported by the Financial Times, the DoD’s final position was essentially: delete the specific phrase about “analysis of bulk acquired data” and we have a deal. Amodei wrote that this phrase “exactly matched the scenario we were most worried about.” He declined.

What followed moved fast. Trump posted on Truth Social calling Anthropic “leftwing nut jobs” who had made a “DISASTROUS MISTAKE trying to STRONG-ARM the Pentagon.” Federal agencies were ordered to “IMMEDIATELY CEASE” use of Anthropic technology. Emil Michael, the under-secretary of defense for research and engineering, attacked Amodei publicly, calling him a “liar” with a “God complex.”

Then, within hours of the White House announcement, OpenAI announced it had struck a new deal with the Pentagon to supply AI to classified military networks. The timing was, to put it charitably, striking.

As of today – 5 March 2026 – Amodei is back at the negotiating table with the DoD, per the FT. Meanwhile, Claude has reportedly already been used in US military operations in Iran.

The public reaction was interesting. Claude saw a surge in app downloads. ChatGPT saw uninstall rates spike by a reported 295%. But commercial popularity is cold comfort when you have just been blacklisted by the largest single government spender on technology in the world.

This is the first time the “supply chain risk” designation – previously reserved for foreign adversaries – has been applied to an American company. That alone should make every engineer and engineering leader sit up.


The Global Patchwork: What Each Jurisdiction Is Actually Betting On

To understand why this moment matters, you need to understand the regulatory context it sits inside. Because AI governance is not a unified field – it’s a fragmented, actively contested space where different governments are making fundamentally different bets about what AI is, who it serves, and who gets to decide.

United States: Federal Preemption as Strategy

The US federal position under Trump is not simply “no regulation.” It’s more specific than that: no state-level regulation, no international constraints, and minimal federal constraints, with the government reserving the right to set the terms itself.

On December 11, 2025, Trump signed an Executive Order that established a DOJ AI Litigation Task Force whose explicit purpose is to challenge state AI laws in federal court. The legal theory is a Dormant Commerce Clause argument – that state-level AI regulation creates fragmented interstate burdens and therefore violates the constitutional framework for commerce. The practical effect, if successful, would be to nullify laws like Colorado’s algorithmic discrimination statute, which requires impact assessments for AI systems used in consequential decisions.

Biden’s 2023 Executive Order 14110 on safe, secure and trustworthy AI had already been revoked in January 2025. The federal government’s posture is: we don’t want binding rules, we don’t want states making binding rules, and when we do want to use AI, we want to use it our way.

The Anthropic standoff is a direct expression of that posture applied to a procurement context. The government is not arguing the ethical restrictions are wrong in principle – it’s asserting that sovereign prerogative trumps corporate policy.

European Union: The Binding Bet

The EU AI Act is the most ambitious attempt to create binding, comprehensive AI law anywhere in the world. It’s risk-tiered: most AI use is lightly regulated or unregulated, but high-risk applications – which include biometric systems, critical infrastructure, law enforcement tools, and employment decision systems – face strict conformity assessments, transparency requirements, human oversight mandates, and data governance rules.

Some use cases are outright prohibited. Real-time remote biometric surveillance in public spaces is banned. Social scoring by public authorities is banned. AI systems that manipulate people through subliminal techniques are banned.

The high-risk requirements come into force from August 2026. That’s barely five months away. If you are deploying AI into any of those categories for European users or within the EU, you need to be building compliance infrastructure now, not in response to an enforcement notice.

The EU’s bet is that hard rules create a stable, trustworthy environment for AI adoption. It’s also a bet that “Brussels effect” dynamics – where the EU market is large enough that global companies build to EU standards by default – will extend European norms outward. That has worked with GDPR to a meaningful extent. Whether it works with AI regulation is a live question.

United Kingdom: The Voluntary Wager

The UK has explicitly chosen not to be the EU. The post-Brexit AI policy has been built around a “pro-innovation” stance: no single binding AI law, no dedicated AI regulator, sector-specific guidance from existing regulators (FCA, ICO, CQC, Ofcom depending on context), and voluntary commitments from frontier labs through the AI Safety Institute.

The AISI has done genuinely useful technical work – particularly on model evaluations and safety research. But voluntary commitments are not enforcement mechanisms. A lab can sign a commitment, and if a government then demands something that conflicts with it, the commitment has no legal teeth.

The UK is betting that being light-touch attracts investment and talent, and that safety can be achieved through scientific cooperation rather than legal constraint. It may be right. But the Anthropic case illustrates the ceiling on that approach: when a government with leverage wants something, a voluntary commitment is not a barrier.

China: A Different Axis Entirely

China’s approach to AI regulation is not primarily about safety in the Western sense. Since 2023, generative AI systems operating in China have been required to ensure their outputs align with “socialist core values.” There are content filtering requirements, real-name registration requirements, and heavy censorship obligations.

This is less about preventing harm to users and more about ensuring AI systems serve state objectives. The regulatory apparatus is also less coherent than the EU’s – it is a patchwork of rules from different agencies, often announced quickly and with limited implementation guidance.

For most Western engineering teams, the practical implication of China’s regulatory regime is: if you want to operate there, you build a separate system with different constraints. The ethical framework baked into models trained on Western data with Western AUPs is not deployable in China without substantial modification.

The Rest of the Landscape

A few others are worth noting for completeness. Canada’s Artificial Intelligence and Data Act (AIDA) – introduced in 2022 as part of Bill C-27 – has stalled repeatedly in Parliament and has still not become law. Brazil is further along on an AI bill that takes a rights-centric approach influenced by its existing digital rights framework (LGPD). India has a voluntary framework that leans heavily toward pro-innovation positioning, similar to the UK. Japan likewise favours voluntary guidelines and has positioned itself as a bridge between Western and Asian approaches.

The pattern is roughly this: binding law in the EU, voluntary frameworks almost everywhere else, and China on a separate track focused on content control rather than safety.


What This Means for Engineering Teams

Let’s get concrete. You’re a senior engineer or engineering leader. You’re deploying AI agents – whether that’s an LLM-based customer service system, a document processing pipeline, a coding assistant, or something more complex. The Anthropic story and the regulatory landscape raise three questions you need to have answers to.

1. Whose ethics govern your deployment?

When you use an AI system, you’re operating under a stack of ethical constraints. At the bottom is the model provider’s acceptable use policy – Anthropic’s AUP, OpenAI’s usage policies, Google’s terms. Those are contractual: violate them and you lose access, and potentially face legal action.

Above that is your own organisation’s AI policy. Hopefully you have one. If you don’t, the provider’s AUP is your de facto policy, which means someone else’s decisions are governing your deployments.

Above that is the law of the jurisdiction you operate in. That includes sector-specific regulation (finance, healthcare, education all have overlapping regimes) and general AI regulation where it exists.

The Anthropic case illustrates that these layers can conflict. A government can demand that a provider loosen its AUP – or, more realistically for most organisations, a government can demand that a system operate in a way that your provider’s AUP doesn’t permit. Understanding your position in that stack is not optional.
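To make the layering concrete, here is a minimal sketch of how a team might represent that stack in code and check a proposed use case against every layer before deployment. The layer names and prohibited-use categories are illustrative inventions for this example – they are not taken from any provider’s actual AUP or from any statute.

```python
from dataclasses import dataclass, field

@dataclass
class PolicyLayer:
    """One layer of the constraint stack: provider AUP, org policy, or law."""
    name: str
    prohibited_uses: set[str] = field(default_factory=set)

def evaluate_use_case(use_case: str, layers: list[PolicyLayer]) -> list[str]:
    """Return the names of every layer that prohibits this use case.

    An empty list means no layer objects; a non-empty one should block
    deployment until the conflict is resolved explicitly, not silently.
    """
    return [layer.name for layer in layers if use_case in layer.prohibited_uses]

# Illustrative stack – the category labels are invented for this example.
stack = [
    PolicyLayer("provider_aup", {"mass_surveillance", "autonomous_weapons"}),
    PolicyLayer("org_policy", {"mass_surveillance", "biometric_identification"}),
    PolicyLayer("eu_ai_act", {"social_scoring", "subliminal_manipulation"}),
]

blockers = evaluate_use_case("biometric_identification", stack)
if blockers:
    print(f"Blocked by: {', '.join(blockers)}")  # -> Blocked by: org_policy
```

The point of a structure like this is not the code itself but the discipline it forces: every layer is written down somewhere your team controls, so a change to any one layer – including the provider’s – is visible rather than silent.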

2. Corporate ethics policies are not law – and can be overridden by state power

This is the central lesson of the Anthropic standoff, and it deserves to be stated plainly. Anthropic built what was, by most accounts, a serious and thoughtful ethical framework. It was detailed enough that it ended up in contract language with the DoD. And when a state actor decided it wanted something different, that framework became a bargaining chip.

For enterprise deployments, the implication is this: the protections you think you have from a provider’s policy exist only as long as the provider has the power and the will to enforce them. That’s not nothing – Amodei’s decision to walk away from a $200M+ contract is not nothing. But it is not the same as a legal right.

If there are specific uses of AI that your organisation genuinely cannot permit – for regulatory, ethical, or reputational reasons – you need those constraints to exist at multiple layers of the stack, not just in your provider’s AUP. That means your own system architecture, your own logging and audit infrastructure, and where possible, contractual protections with your provider that survive pressure from third parties.
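One way to make those constraints real at the architecture layer is to route every model call through an organisation-owned gate that logs the request and enforces your own blocked-purpose list, independently of whatever the provider permits. The sketch below is a minimal illustration, assuming a hypothetical call_model stand-in for whichever provider SDK you actually use; the purpose labels are invented for the example.

```python
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("ai_audit")

# Organisation-level blocked purposes, enforced in your own code path so they
# survive even if the provider's acceptable use policy is later loosened.
ORG_BLOCKED_PURPOSES = {"bulk_surveillance_analysis", "autonomous_targeting"}

def call_model(prompt: str) -> str:
    """Stand-in for whatever provider SDK you actually use."""
    raise NotImplementedError

def governed_call(prompt: str, purpose: str, requested_by: str) -> str:
    """Route every model call through an org-owned policy gate and audit log."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "purpose": purpose,
        "requested_by": requested_by,
        "allowed": purpose not in ORG_BLOCKED_PURPOSES,
    }
    logger.info(json.dumps(record))  # ship to your own audit store, not the provider's
    if not record["allowed"]:
        raise PermissionError(f"Purpose '{purpose}' is blocked by organisation policy")
    return call_model(prompt)
```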

3. Regulatory fragmentation creates real compliance complexity

If you’re building AI systems for a global audience, you are simultaneously navigating:

  • EU AI Act high-risk requirements (binding, August 2026)
  • UK sector-specific guidance (voluntary but increasing in specificity)
  • US federal preemption that may nullify state protections you were relying on
  • Country-specific rules in any market you operate in

The DOJ Litigation Task Force is particularly significant here. Several US states – most notably Colorado with its algorithmic discrimination law – have passed or are passing substantive AI regulation. If the federal preemption strategy succeeds, those protections evaporate. Engineering teams who built compliance programs around Colorado’s requirements may find themselves working to rules that are being challenged in federal court.

The practical advice here is boring but important: do not build compliance solely around the most permissive framework you can find. The landscape is moving fast, enforcement is coming, and a system architected to just-barely-meet today’s requirements is going to be expensive to remediate when the rules tighten. Build in the margin.
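As a sketch of what “building in the margin” can look like in practice, the snippet below derives a deployment’s required control set as the union of every regime that applies to it, rather than the minimum of the most permissive one. The regime names and control flags are simplified placeholders, not a statement of what any law actually requires.

```python
# Illustrative only: regime names and control flags are simplified labels.
REGIME_CONTROLS = {
    "eu_ai_act_high_risk": {"impact_assessment", "human_oversight", "event_logging"},
    "uk_sector_guidance": {"event_logging"},
    "colorado_sb205": {"impact_assessment"},
}

# Hypothetical mapping from each regime to the regions where it bites.
APPLICABLE_REGIONS = {
    "eu_ai_act_high_risk": {"eu"},
    "uk_sector_guidance": {"uk"},
    "colorado_sb205": {"us"},
}

def required_controls(deployment_regions: set[str]) -> set[str]:
    """Union of controls across every regime touching the deployment regions.

    Building to the combined set – rather than to the single most permissive
    regime – is the margin this section argues for.
    """
    controls: set[str] = set()
    for regime, regions in APPLICABLE_REGIONS.items():
        if regions & deployment_regions:
            controls |= REGIME_CONTROLS[regime]
    return controls

print(required_controls({"eu", "us"}))
# e.g. {'impact_assessment', 'human_oversight', 'event_logging'}
```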


The OpenAI Comparison: What the Timing Tells You

Sam Altman’s announcement of the OpenAI-Pentagon deal, coming within hours of the White House targeting Anthropic, has been widely read as opportunistic. That reading is probably correct. It’s also understandable – OpenAI is a company with its own commercial pressures, and a competitor being blacklisted creates an opening.

Altman was careful to assert that OpenAI’s deal includes prohibitions on domestic mass surveillance and autonomous weapons. “Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force,” he wrote. He also called on the Pentagon to “offer these same terms to all AI companies.”

The charitable interpretation is that OpenAI achieved in a few hours what Anthropic spent months failing to achieve: a deal with meaningful ethical constraints intact. If that’s true, it raises the question of why the Pentagon was willing to accept those terms from OpenAI but not from Anthropic.

The less charitable interpretation – advanced by Amodei in his staff memo – is that “messaging from the Pentagon and OpenAI was just straight up lies about these issues or tries to confuse them.” Without seeing the full contract text, it’s not possible to adjudicate this.

What we can say is this: the optics of the situation created a strange market dynamic. ChatGPT uninstall rates surged. Claude downloads spiked. There is apparently a portion of the public that was aware enough of the ethical stakes to express a preference with their behaviour. That is a data point about where enterprise and consumer AI is heading – and it suggests that the companies with the clearest, most credible ethical frameworks will have a real commercial advantage as AI becomes more embedded in daily life.


Closing: Negotiating Positions, Not Law

Here’s the hard truth that the Anthropic case makes unavoidable: corporate AI ethics policies are not law. They are, as the last few weeks have demonstrated, negotiating positions.

Anthropic wrote a careful, detailed ethical framework. It embedded that framework in its contracts. When a sovereign government decided it wanted something different, that framework became the subject of a negotiation in which the government held most of the leverage – access to one of the largest procurement markets in the world, the ability to designate a company a national security risk, and a sitting president willing to post hostile messages to 50 million followers.

That Amodei walked away, and apparently held his position for weeks, is genuinely notable. The first-ever application of the “supply chain risk” designation to an American company is a significant escalation of state power over private AI governance. The fact that Anthropic is back at the negotiating table today doesn’t mean they’ve capitulated – but it does mean the negotiation continues, and state power has not retreated.

For engineering leaders, the implications are structural. Do not build your AI compliance program on the assumption that your provider’s ethics framework is stable and sovereign. It may be excellent – Anthropic’s is. But it exists in a world where governments have interests, and those interests do not always align with responsible AI deployment.

Build your own constraints. Audit your own systems. Understand which jurisdictions’ laws actually apply to you. And watch what happens next in this case – because however it resolves, it will set the pattern for every AI ethics standoff that follows.


Sources: CNBC · The Guardian · Guardian Opinion (Sanders/Schneier) · Washington Post