The CEO of Anthropic, Dario Amodei, has apologised for a leaked internal memo criticising US President Donald Trump, saying it does not represent his current views. “The leaked memo does not reflect my careful or considered views and is an out-of-date assessment of the current situation,” he told reporters.

Before speaking to the media, Amodei had already issued an apology in a blog post about the company’s ongoing discussions with the Department of War.

“I also want to apologize directly for a post internal to the company that was leaked to the press yesterday. Anthropic did not leak this post nor direct anyone else to do so – it is not in our interest to escalate this situation,” the blog said.

The company explained that the memo was written shortly after a series of developments involving the Trump administration and the Pentagon.

“That particular post was written within a few hours of the President’s Truth Social post announcing Anthropic would be removed from all federal systems, the Secretary of War’s X post announcing the supply chain risk designation, and the announcement of a deal between the Pentagon and OpenAI, which even OpenAI later characterised as confusing.”

“It was a difficult day for the company, and I apologize for the tone of the post. It does not reflect my careful or considered views. It was also written six days ago, and is an out-of-date assessment of the current situation,” the statement added.

US government designates Anthropic as ‘Supply Chain Risk’

The US government confirmed that the AI company has been officially designated a “Supply Chain Risk” (SCR). The designation was made by the United States Department of War (formerly the United States Department of Defense), prompting Anthropic to say it plans to challenge the move in court.

In a blog post on Thursday, Amodei said the company received a letter from the Department of War on March 4 confirming the designation. He added that Anthropic believes the decision is not legally justified and will pursue legal action.

“Yesterday (March 4) Anthropic received a letter from the Department of War confirming that we have been designated as a supply chain risk to America’s national security. We do not believe this action is legally sound, and we see no choice but to challenge it in court,” Amodei said.

Anthropic questions legal basis of designation

He explained that the law cited by the government – 10 USC 3252 – is “narrow” and meant to protect the government rather than punish a supplier. According to the company, the SCR designation does not restrict the broader use of its AI model Claude or business relationships that are unrelated to Department of War contracts.

Amodei also said the company has been in discussions with the department in recent days.

“The War Department and our company are both committed to advancing US national security and agree on the urgency of applying AI across government. The language used by the DoW matches our statement on Friday that the vast majority of customers are unaffected by the SCR designation,” he told reporters.

Amodei said the company’s priority is to ensure that national security experts continue to have access to important AI tools. “Our only concerns have been our exceptions on fully autonomous weapons and mass domestic surveillance. We will provide our models to DoW and the national security community, at nominal cost and with continuing support from our engineers,” he said.

He added that Anthropic will continue providing its models for as long as necessary to support the transition, provided it is permitted to do so.

The CEO also clarified that the SCR designation applies only to the use of Claude in contracts directly linked to the Department of War and does not affect all customers.

Impact limited to specific government contracts

According to the company, the designation only affects the use of Claude in projects directly tied to Department of War contracts. It does not impact general access to the company’s AI tools.

This means individual users and commercial customers can continue using Claude through the company’s API, claude.ai, and other products without any change.

For contractors working with the Department of War, the designation would only affect the use of Claude for specific government contract work. Their use of the AI system for other clients or commercial purposes would remain unaffected.