Commissioned, Curated and Published by Russ. Researched and written with AI.


What’s New

Date         Update
2026-03-05   Dario Amodei and Emil Michael (undersecretary of defense) are back at the negotiating table, per the FT. Status: ongoing.
2026-03-01   OpenAI confirmed the Pentagon deal. Altman later acknowledged they “shouldn’t have rushed.”
2026-02-28   Negotiations collapsed. Trump order issued. Hegseth supply-chain risk designation announced. OpenAI deal announced within hours.
2026-02-28   Claude reportedly used in US military operations against Iran, the same day as the ban.

One Phrase

In the final hours of negotiations between Anthropic and the US Department of Defense, the Pentagon made what sounded like a generous concession. They would accept Anthropic’s terms. All of them – except one phrase. They just needed Anthropic to delete: “analysis of bulk acquired data.”

Anthropic CEO Dario Amodei said no.

In a memo to staff seen by the Financial Times, Amodei wrote that the phrase “exactly matched the scenario we were most worried about.” He refused. Talks collapsed on Friday, February 28. President Trump issued an executive order directing all federal agencies to stop using Anthropic tools. Defense Secretary Pete Hegseth announced he would designate Anthropic a supply-chain risk to national security – a label previously reserved for companies like Huawei and ZTE.

Within hours, OpenAI announced a new Pentagon deal.

The story has since moved fast: consumer backlash, diplomatic back-channels, and as of today (March 5), Amodei is back at the table with Emil Michael, the undersecretary of defense who last week called him a “liar” with a “God complex” on X.

But whatever happens in those negotiations, the more important question for engineering teams has already been answered. When corporate AI ethics policy collides with state power, which one governs? The first major real-world test produced a clear result.


The Timeline

To understand the engineering implications, you first need the sequence laid out clearly.

Late 2025: Anthropic signed a $200 million contract with the Department of Defense to deploy Claude in classified government networks. This was a landmark – Claude became the first major AI model deployed inside classified US government infrastructure. The contract included usage restrictions: Claude could not be used for domestic mass surveillance or fully autonomous weapons systems.

January 2026: The Pentagon published its updated AI strategy, which called for military AI to be available for “any lawful use.” That framing was fundamentally incompatible with Anthropic’s usage restrictions.

February 2026: Negotiations intensified. The DoD pushed for the removal of Anthropic’s specific safeguards. Anthropic pushed back. The specific sticking point, as Amodei later described it, was domestic surveillance – specifically “analysis of bulk acquired data,” the kind of large-scale signals intelligence collection that has historically been the domain of the NSA.

February 28, 2026: Talks broke down. The sequence that followed happened within a single day:

  • Trump issued his order banning federal use of Anthropic tools
  • Hegseth announced the supply-chain risk designation
  • OpenAI announced its Pentagon deal
  • Claude was reportedly used in US military strikes against Iran (per WSJ reporting)

That last item deserves to sit with you for a moment. On the same day Anthropic was banned for refusing to allow its AI to be used for bulk surveillance, Claude was being used in active military operations. The policy and the operational reality had completely diverged – and nobody appears to have hit a kill switch.

March 1-4, 2026: OpenAI CEO Sam Altman, facing significant consumer backlash (ChatGPT uninstalls surged 295%), acknowledged that OpenAI “shouldn’t have rushed” and called on the Pentagon to offer Anthropic equivalent terms. Claude downloads surged in the same period.

March 5, 2026 (today): Amodei and Michael are reportedly back in talks. The outcome is unknown at time of writing.


Whose Ethics Govern Your Deployment?

Here is the question this case forces into the open, and it is one that most enterprise engineering teams have not seriously asked: when you build on an AI API, whose constraints actually govern what that system can do?

The standard answer is “the vendor’s Acceptable Use Policy.” You sign up, you agree to the terms, the vendor enforces them. For most commercial use cases – marketing copy, code assistants, customer support bots – this works fine because nobody is testing the edges.

But the Anthropic/Pentagon case is the first high-stakes test of what happens when a major customer with significant leverage pushes directly against the edges of a vendor AUP. And the answer was not “the AUP holds.” The answer was “the contract falls apart.”

This has several practical implications for engineering and architecture decisions.

AUPs are negotiating positions, not law. Anthropic’s usage restrictions are contractual terms, not regulatory requirements. The Pentagon had legal authority to demand any lawful use of technology it was purchasing. Anthropic’s restrictions were enforceable only insofar as Anthropic was willing to walk away from the contract – which, in this case, they were. Most commercial customers do not have the leverage to demand exceptions, but the principle holds: a vendor AUP is only as strong as the vendor’s willingness to enforce it.

The “safety-first” brand is a double-edged positioning. Anthropic built its market position explicitly on being the responsible AI lab. Their Constitutional AI approach, their usage policies, their refusal to rush capability development at the expense of safety – all of this has made them the preferred AI vendor for risk-averse enterprise customers. Banks, healthcare companies, legal firms, regulated industries: they chose Anthropic precisely because the guardrails felt solid.

But those same guardrails are why the US government banned them. The positioning that makes Anthropic trustworthy to risk-averse enterprise customers is the exact thing that made them incompatible with a government demanding “any lawful use.”

For enterprise risk teams doing vendor assessments: a vendor’s safety posture is not just a feature. It is also a potential supply-chain fragility. A vendor that refuses to compromise its policies under pressure from state actors will also refuse to compromise them under pressure from your procurement team. That is largely a good thing – but it is worth modelling explicitly.

The gap between policy and operational reality is real. The most operationally significant detail in this story is not the ban or the OpenAI deal. It is that Claude was reportedly used in active military operations on the same day Anthropic was banned. If accurate, this means that a policy decision at the vendor level did not translate into operational reality at the deployment level. The model continued running in systems that were supposed to stop using it.

For engineering teams: this is a prompt to audit your own deployment architecture. When a vendor policy changes – or when your organisation decides to switch models – how quickly can you actually enforce that at the infrastructure level? How many places does your AI model touch? How many are monitored? The gap between “we’ve changed the contract” and “the change is reflected in production” can be significant, and this case illustrates what that gap can look like at scale.
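To make that concrete: one common pattern is to route every AI call through a single gateway that consults a centrally managed vendor allowlist, so that disabling a vendor is one configuration change rather than a hunt through every service. The sketch below is a minimal illustration, not a production design; the names (ModelGateway, call_vendor) and the 60-second refresh interval are all hypothetical.

```python
# Minimal sketch of a vendor kill switch: all AI calls pass through one
# gateway that checks a centrally managed allowlist before dispatching.
# Names and the TTL are hypothetical, not any vendor's real API.
import time

ALLOWLIST_TTL_SECONDS = 60  # worst-case gap between policy change and enforcement


class VendorDisabledError(Exception):
    """Raised when a request targets a vendor that policy has disabled."""


class ModelGateway:
    def __init__(self, fetch_allowlist):
        self._fetch_allowlist = fetch_allowlist  # e.g. reads a central config service
        self._allowlist: set[str] = set()
        self._fetched_at = 0.0

    def _current_allowlist(self) -> set[str]:
        # Refresh from the central source at most every TTL seconds.
        if time.time() - self._fetched_at > ALLOWLIST_TTL_SECONDS:
            self._allowlist = set(self._fetch_allowlist())
            self._fetched_at = time.time()
        return self._allowlist

    def complete(self, vendor: str, prompt: str) -> str:
        # Enforcement happens here, once, for every caller in the organisation.
        if vendor not in self._current_allowlist():
            raise VendorDisabledError(f"{vendor} is disabled by policy")
        return call_vendor(vendor, prompt)


def call_vendor(vendor: str, prompt: str) -> str:
    # Placeholder for the real SDK dispatch (anthropic, openai, ...).
    raise NotImplementedError
```

The design point is the refresh interval: it is your measurable upper bound on how long a banned model keeps serving traffic, which is exactly the gap this case exposed. If your AI calls are scattered across services with no such choke point, you do not have an upper bound at all.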


The OpenAI Comparison

OpenAI’s decision to sign a Pentagon deal within hours of Anthropic being banned was, at minimum, poor optics. At worst, it signals a meaningful difference in how the two companies approach the question of use-case limits.

OpenAI has historically had a more permissive approach to government and military partnerships. They have also, historically, had a less explicit set of usage restrictions than Anthropic. The Pentagon deal they signed is not, as of writing, fully public in its terms. What we do not know: whether OpenAI agreed to “any lawful use,” whether there are equivalent restrictions to what Anthropic had sought, or whether any restrictions in the contract are meaningful or merely cosmetic.

What we do know: Altman later said OpenAI “shouldn’t have rushed,” and called on the Pentagon to offer Anthropic the same terms they offered OpenAI. That is an interesting statement. It implies either that OpenAI’s terms were closer to Anthropic’s position than initially understood, or that Altman was engaged in reputation management after the consumer backlash. Possibly both.

The consumer reaction was sharp and fast. ChatGPT uninstalls surging 295% within days of the deal is not a trivial signal. It suggests that a meaningful segment of the consumer market does care about these distinctions – that the question of whether an AI company will allow its tools to be used for domestic surveillance matters to people deciding which chatbot to use.

For enterprise procurement: the question of whether your AI vendor has equivalent or weaker usage restrictions than Anthropic is now a live one. The OpenAI/Pentagon deal, whatever its terms, establishes a precedent. If OpenAI agreed to use-case terms that Anthropic refused, the two products are not equivalent from a governance standpoint. Any enterprise risk assessment that treated them as interchangeable needs to be revisited.


The Supply-Chain Risk Designation

The most technically consequential outcome of this story may be the supply-chain risk designation. This deserves careful attention because the implications extend well beyond Anthropic.

Supply-chain risk to national security is a designation with regulatory teeth. It is the same label applied to Huawei and ZTE, which resulted in those companies being effectively banned from US telecommunications infrastructure, from equipment procurement at organisations receiving federal funding, and from a broad range of government-adjacent contracts. The designation triggers restrictions under the National Defense Authorization Act and related procurement rules.

Applying this label to an American company – the first time it has been done – is significant for several reasons.

First, it establishes that the supply-chain risk framework can apply to domestic vendors. Previously, the implicit assumption was that this designation was a tool for managing foreign adversary technology. The Anthropic case has expanded its scope.

Second, organisations subject to government procurement rules – defence contractors, federally funded research institutions, critical infrastructure operators, healthcare organisations receiving Medicare and Medicaid funding – now need to assess whether using Anthropic products creates compliance exposure. This is not hypothetical. Procurement counsel and compliance teams at government-adjacent organisations need to be reviewing this.

Third, and most importantly for the broader industry: this designation is reversible. If Amodei and Michael reach a deal, the designation presumably goes away. But the mechanism has now been demonstrated. The US government has shown that it can use supply-chain risk designation as a lever in commercial negotiations with domestic AI companies. That changes the calculus for every AI company negotiating government contracts going forward.

For engineering teams at government-adjacent organisations: if you are currently running Anthropic products in any environment subject to federal procurement rules, you need to have a conversation with your legal and compliance teams now. The designation may or may not create immediate obligations, but the risk profile has changed and that change needs to be assessed explicitly.
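One way to ground that conversation in data is a quick inventory of where vendor SDKs actually appear in your codebases. Below is a rough sketch along those lines, assuming locally checked-out repositories; the package patterns are illustrative, not exhaustive, and this only catches direct imports, not proxied API traffic or shadow usage.

```python
# Rough inventory sketch: walk checked-out repositories and flag files that
# reference known AI vendor SDKs. Patterns are illustrative only.
import pathlib
import re
import sys

VENDOR_MARKERS = {
    "anthropic": re.compile(r"\bimport anthropic\b|\bfrom anthropic\b"),
    "openai": re.compile(r"\bimport openai\b|\bfrom openai\b"),
}


def scan(root: str) -> dict[str, list[str]]:
    hits: dict[str, list[str]] = {vendor: [] for vendor in VENDOR_MARKERS}
    for path in pathlib.Path(root).rglob("*.py"):
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file; skip rather than fail the whole scan
        for vendor, pattern in VENDOR_MARKERS.items():
            if pattern.search(text):
                hits[vendor].append(str(path))
    return hits


if __name__ == "__main__":
    root = sys.argv[1] if len(sys.argv) > 1 else "."
    for vendor, files in scan(root).items():
        print(f"{vendor}: {len(files)} file(s)")
        for f in files:
            print(f"  {f}")
```

A scan like this does not answer the compliance question, but it turns “do we use this vendor anywhere?” from a guess into a list you can hand to counsel.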


The Pattern Is Established

The negotiations are ongoing. By the time you read this, Amodei and Michael may have reached an agreement, or talks may have collapsed again. The specific outcome matters less than what this case has already demonstrated.

Corporate AI ethics policies are not self-enforcing. They are contractual terms, and like all contractual terms, they are subject to renegotiation under pressure. The Anthropic/Pentagon standoff is the first major test of what happens when the counterparty has state power and the willingness to use it. The result was not “the policy held.” The result was either “we walk away” or “we renegotiate.”

Anthropic walked away. That is notable, and worth acknowledging. They held the line on a specific provision – domestic surveillance – that they believed mattered, even when walking away cost them the contract and earned them a supply-chain risk designation. That is not nothing. Most companies would not have done it.

But the broader lesson for engineering teams is less about Anthropic’s choices and more about the architecture of AI governance. When you build systems on AI APIs, the constraints on those systems are not technical. They are contractual. They depend on vendor willingness to enforce them, on your own operational discipline in implementing them, and on the legal and regulatory environment you operate in.

The Anthropic case shows all three of these layers operating simultaneously – and shows how quickly they can come apart. The vendor held firm. The operational reality diverged anyway. The regulatory environment changed in a single day.

The question for your organisation is not “does our AI vendor have good ethics?” The question is: “do we understand exactly which constraints govern our AI deployment, how they are enforced, and what happens when they fail?”

If you cannot answer that precisely, this is a good week to find out.


Sources: