Following the disagreement between Anthropic and the Pentagon over a defence deal, which led the Trump administration to label the AI startup a supply chain risk, the US Department of Defense has further escalated its standoff with the company. The Pentagon’s Chief Technology Officer, Emil Michael, warns that Anthropic’s Claude AI models could “pollute” the military supply chain due to built-in policy preferences.
In comments made on CNBC’s “Squawk Box” on March 12, 2026, Michael described Claude’s “constitution” as embedding a “different policy preference” that risks influencing critical defence systems.
Pentagon tech chief warns of Claude polluting defence systems
In the interview, Emil Michael stated, “We can’t have a company that has a different policy preference that is baked into the model through its constitution, its soul, its policy preferences pollute the supply chain so our war fighters are getting ineffective weapons, ineffective body armor, ineffective protection.” He went on to say that the supply chain risk designation is “not meant to be punitive,” since only a small portion of Anthropic’s business involves US government work.
Michael also stated that a gradual transition plan is needed since “we can’t just rip out Anthropic overnight.”
The controversy surrounding Anthropic’s broken deal marks the first time a US-based company has been publicly labelled a supply chain risk – a classification usually applied to foreign entities posing national security threats. The Pentagon formally notified Anthropic of the designation earlier in March 2026, requiring defence contractors to stop using Claude in Pentagon-related projects.
Anthropic-Pentagon clash: Claude AI to be phased out from defence networks
The conflict originally emerged when Anthropic refused to remove certain safeguards from its AI models, particularly those restricting the use of Claude for fully autonomous weapons and mass domestic surveillance. These restrictions clashed with the Pentagon’s demand for unrestricted “all lawful use” in military applications. Anthropic’s constitution prioritises safety above all else, aiming to ensure helpful, honest, and harmless responses.
Following the supply chain risk designation, Anthropic filed a lawsuit against the Department of Defense and related agencies, calling the move “unprecedented and unlawful”. The Dario Amodei-led firm argues that the designation threatens hundreds of millions of dollars in government contracts.
In a separate company statement, Amodei noted that the designation’s scope is narrow, applying only to direct use in DoD contracts rather than all customer relationships. The company is pursuing legal remedies while continuing discussions with the government.
Even amid the ban, Anthropic’s Claude AI has recently drawn fresh controversy: several reports alleged its involvement with Palantir’s targeting technology, which reportedly helped the US armed forces bomb several targets in Iran.
