The standoff between the US Department of War and Anthropic appears to have taken a different turn just hours after President Donald Trump banned the use of Claude AI across government departments. According to a report by The Wall Street Journal, the US military continued to rely on the company’s Claude AI model during recent airstrikes on Iran, despite the president’s order that federal agencies halt its use.

Citing people familiar with the matter, the report revealed that US Central Command (CENTCOM) in the Middle East employed Claude for critical operational support, including intelligence assessments, target identification, and simulating battle scenarios ahead of and during the strikes. The usage of the AI chatbot continued despite the administration’s directive, highlighting the deep integration of Anthropic’s technology into defence workflows even as political fallout intensified.

US Military used Anthropic Claude for strikes on Iran

The ban followed ongoing friction between Anthropic CEO Dario Amodei and the US Department of War (DoW). Amodei had refused the Pentagon’s demands for unrestricted access to Claude, insisting on safeguards against mass domestic surveillance and fully autonomous weapons. In response, Trump labelled Anthropic a national security risk in a post on Truth Social, directing an immediate halt to its use across federal agencies. He added, however, that it would take approximately six months for the Department of War to phase out Anthropic’s technology, a window that may have allowed the US military to use Claude AI during the Iran strikes.

Trump reportedly stated, “We don’t need it, we don’t want it, and will not do business with them again!” while calling the company’s stance a “disastrous mistake” and its staff “left-wing nut jobs” whose actions risked American lives and national security.

Before the attack on Iran, the WSJ had reported that Anthropic’s AI model was also used by the US during the capture of Venezuelan President Nicolás Maduro.

Anthropic has vowed to challenge the “supply chain risk” designation in court, describing it as “legally unsound” and a dangerous precedent for any US company negotiating with the government. The company reiterated its firm position, stating, “No amount of intimidation or punishment from the Department of War will change our stance on mass domestic surveillance or fully autonomous weapons.” It clarified that the designation primarily affects Department of War contracts, leaving other operations unaffected.

Anthropic’s AI used for military operations

The strikes on Iran occurred amid heightened US-Israel-Iran hostilities, with coordinated US-Israeli operations reportedly targeting key sites in Iran following stalled nuclear talks and claims of resumed Iranian nuclear activity; the strikes reportedly killed Iran’s Supreme Leader Ali Hosseini Khamenei. The WSJ report highlights how crucial AI tools like Claude have become in military planning, even as the administration pushes to sever ties with Anthropic.

Anthropic has not commented further on the specific usage in the Iran operation, while the Pentagon and White House have yet to issue official responses to the WSJ claims.