Anthropic CEO Dario Amodei has criticised rival OpenAI’s recent agreement with the US Department of Defense (Pentagon), labelling it “safety theatre” in an internal memo to employees. The controversy emerged after negotiations between Anthropic and the Pentagon collapsed last week over deeper integration of its Claude models into weapons systems. Anthropic held a $200 million defense contract but refused to proceed with an expanded deal unless the Department provided explicit written assurances prohibiting the use of its technology for mass domestic surveillance of US citizens or for fully autonomous lethal weapons.
The Pentagon, however, declined to write such assurances into the contract, eventually leading Anthropic to back out of the deal. Defense Secretary Pete Hegseth subsequently designated Anthropic a potential supply chain risk, prompting President Trump to order federal agencies to cease using its tools.
Hours after the fallout became public, OpenAI announced its own deal with the Pentagon to deploy its AI models, including ChatGPT, in classified military environments.
OpenAI’s deal includes safeguards
In a blog post and statements from CEO Sam Altman, OpenAI highlighted that the agreement includes prohibitions on mass domestic surveillance (which it described as illegal and outside the scope of the deal), human responsibility for the use of force (including bans on unconstrained autonomous weapons), and restrictions on high-stakes automated decisions. The company also pointed to technical safeguards: cloud-only deployment, no “guardrails-off” models, and no edge-device access that could enable lethal autonomy.
Following backlash, OpenAI amended the deal to add clearer language affirming no intentional use for domestic surveillance of US persons and barring deployment by intelligence agencies like the NSA without further contract modifications. Altman acknowledged the initial announcement appeared “opportunistic and sloppy” and “rushed,” while defending the move as an effort to de-escalate and ensure warfighters have access to advanced tools under strong principles.
Amodei condemns OpenAI’s move
Amodei, in his memo reported by The Information, accused OpenAI of prioritising internal employee optics and placating staff over genuine safeguards. He described OpenAI’s public framing of the deal as misleading and as “straight-up lies,” claiming the company accepted elastic language around “all lawful purposes” that could shift with changing laws or national security priorities.
Amodei contrasted this with Anthropic’s approach, asserting his firm “held our red lines with integrity” rather than engaging in what he called performative safety measures to manage perceptions. He further alleged that OpenAI’s leadership, including Altman, had offered “dictator-style praise” to the Trump administration, unlike Anthropic, which has supported AI regulation and been candid about policy issues like job displacement.
The rivalry has spilled into public view with significant repercussions. Following the announcements, Anthropic’s Claude app saw a sharp rise in US App Store rankings, while reports indicated a nearly 300% surge in ChatGPT deletions from phones. Anthropic positions itself as uncompromising on safeguards that protect democratic values, even at the cost of lucrative contracts. OpenAI, meanwhile, argues its agreement incorporates robust protections aligned with existing laws and policies, while enabling responsible defense collaboration.
