Tesla and SpaceX CEO Elon Musk has now jumped into the ongoing controversy involving Anthropic and the US Department of War, taunting Dario Amodei’s firm as the “most hypocritical company”. Musk’s comment comes as several reports link Anthropic’s Claude AI model to the deadly strikes carried out as part of US military operations against Iran. Musk was quoting a message from AI safety advocate Holly Elmore urging Anthropic employees to quit, accompanied by a screenshot alleging Claude’s involvement in targeting decisions that contributed to civilian casualties, including the deaths of over 100 young girls at an Iranian elementary school.

AI’s role in the Iran campaign amid ethical clash with Pentagon

The controversy stems from a Washington Post investigation claiming that Claude was integrated into Palantir Technologies’ Maven Smart System – a Pentagon-operated platform used to identify and prioritise targets during the initial wave of US and Israeli strikes on Iran. According to the report, the AI system enabled the military to strike approximately 1,000 targets within the first 24 hours of the campaign, codenamed Operation Epic Fury, by suggesting hundreds of potential sites, providing precise coordinates, and ranking them by strategic importance.

This AI integration reportedly compressed battle planning from weeks to real-time operations, significantly degrading Iran’s counterstrike capabilities.

Even as the US armed forces were planning the operations, Claude’s maker, Anthropic, was locked in a tussle with the US government, which wanted to use the Claude model to power autonomous weapons and domestic mass surveillance efforts. After Anthropic declined the deal, President Donald Trump ordered federal agencies to cease using Anthropic’s technology, labelling the company a “supply chain risk” over its refusal to remove ethical safeguards limiting military applications.

Anthropic has consistently opposed the use of its AI for fully autonomous weapons or mass surveillance of the public, insisting on human oversight in critical decisions. Despite the ban, military officials continued relying on Claude through Palantir’s platform, with sources indicating that phasing it out could take up to six months because of its deep embedding in classified systems.

AI’s role in civilian attacks: Is Claude involved?

The strikes have drawn intense international scrutiny, particularly over civilian casualties. The controversial bombing of a school in southern Iran, which killed an estimated 150 to 175 people, has fuelled suspicion that AI played a role in target selection. Iranian officials blamed US-Israeli forces, while US military investigations have pointed to likely American responsibility, with weapon fragments suggesting a US cruise missile was involved. Early analysis indicates the strike may have been aimed at an adjacent Islamic Revolutionary Guards Corps naval base but devastated the school instead. Neither party has officially commented on AI’s involvement in the strikes.

Critics, including Elmore and Musk, argue that Anthropic’s involvement undermines its public commitment to AI safety. The screenshot in Elmore’s post highlighted the apparent irony, stating, “Given that the Iranian elementary school was hit on the first day of the war, it seems fairly likely that Claude played a role in the selection of that target and thus in the death of more than 100 young girls — many times more kids than were killed in the worst American school shooting.”

Online reactions ranged from calls to boycott Anthropic to defences of its stance against unrestricted military use of AI.