Defense Secretary Pete Hegseth has given Anthropic’s CEO a firm deadline. According to the AP, citing a person familiar with a private meeting held Tuesday, Hegseth told Dario Amodei that Anthropic must allow its artificial intelligence technology to be used by the military without restrictions by Friday, or risk losing its government contract.
Anthropic is the company behind the chatbot Claude. Among major AI companies, it is the only one that has not fully agreed to supply its tools to a new internal US military AI network. Amodei has repeatedly raised ethical concerns about how governments could use AI. He has warned about the dangers of fully autonomous armed drones and of AI systems deployed for mass surveillance, especially to track dissent inside the country.
Last month, Amodei wrote in an essay: “A powerful AI looking across billions of conversations from millions of people could gauge public sentiment, detect pockets of disloyalty forming, and stamp them out before they grow.”
Pentagon pressures Anthropic to open its AI for military use
During the meeting, defense officials reportedly warned that Anthropic could be labeled a “supply chain risk.” They also raised the possibility of invoking the Defense Production Act. That law could give the military broader authority to access the company’s products, even if Anthropic disagrees with how they are used. The details were first reported by Axios. Two officials, one familiar with the meeting and the other a senior Pentagon official, spoke on condition of anonymity.
The dispute reflects a broader debate in Washington about AI’s role in national security. There are growing concerns about how such powerful systems could be used in situations involving lethal force, sensitive data, or government surveillance. The standoff also comes as Hegseth has pledged to eliminate what he calls a “woke culture” within the armed forces.
Other AI companies are moving ahead
Last summer, the Pentagon announced contracts worth up to $200 million each for four AI companies: Anthropic, Google, OpenAI, and xAI, the AI firm founded by Elon Musk. Anthropic was the first to be approved for use on classified military networks, where it works with partners such as Palantir Technologies.
But that may soon change. According to the Pentagon official, Musk’s chatbot Grok is now ready for classified settings as well. The official added that other AI companies are “close” to reaching that milestone.
Hegseth made his position clear in a January speech at SpaceX in South Texas. He said he would shrug off AI models “that won’t allow you to fight wars.”
Two red lines Anthropic won’t cross
The person familiar with Tuesday’s meeting described the discussion as cordial. However, Amodei did not change his position on two firm boundaries Anthropic has set. The company will not allow its AI to be used for fully autonomous military targeting operations. It also refuses to support domestic surveillance of US citizens.
A senior Pentagon official said the Defense Department objects to built-in ethical limits because military tools must be available for all lawful uses. The official added that the Pentagon only issues lawful orders and said that it would be the military’s responsibility to ensure Anthropic’s tools are used legally.
After Tuesday’s meeting, Anthropic released a statement saying it “continued good-faith conversations about our usage policy to ensure Anthropic can continue to support the government’s national security mission in line with what our models can reliably and responsibly do.”
Anthropic has also found itself at odds with President Donald Trump’s administration. The company publicly criticized chipmaker Nvidia over Trump administration proposals that would loosen export controls and allow certain AI chips to be sold in China. Even so, Anthropic remains a close partner of Nvidia. The company and the Republican administration have also been on opposite sides of lobbying efforts over state-level AI regulation.
In October, Trump’s top AI adviser, David Sacks, accused Anthropic of “running a sophisticated regulatory capture strategy based on fear-mongering.”