After a brief attempt to renegotiate the terms of its deal with the US Department of Defense, Anthropic has been formally designated a supply chain risk to national security. In an official statement, the Pentagon said it had “officially informed Anthropic leadership the company and its products are deemed a supply chain risk, effective immediately.” The designation covers the San Francisco-based AI company and its products, including the popular AI chatbot Claude.
The decision follows weeks of escalating tensions between the Trump administration and Anthropic, stemming from the company’s refusal to remove certain safeguards from its AI models. Anthropic had insisted on restrictions preventing the use of Claude for mass domestic surveillance of Americans or for fully autonomous weapons systems without human oversight.
The designation, typically reserved for foreign adversaries such as entities linked to China or Russia, marks the first known application to a US-based company.
The action comes after President Donald Trump directed all federal agencies to cease using Anthropic’s technology. Defense Secretary Pete Hegseth, who now refers to the Department of Defense as the “Department of War,” had previously threatened the designation amid stalled talks. Last Friday, on the eve of heightened Iran-related developments, Trump and Hegseth accused Anthropic of endangering national security by maintaining its ethical guardrails.
Industry response and political backlash
Military contractors have begun responding. Lockheed Martin announced it would comply with the directive and transition to alternative large language model providers, stating it expects “minimal impacts” as it is not reliant on any single vendor.
The designation’s scope remains unclear: it is not yet known whether it bars Anthropic’s products only from military-related work or more broadly across federal contracting. Either way, it could force many defense partners to drop Claude.
Criticism has poured in from both sides of the political aisle. Democratic Sen. Kirsten Gillibrand of New York, a member of the Senate Armed Services and Intelligence Committees, called it “a dangerous misuse of a tool meant to address adversary-controlled technology,” describing the move as “shortsighted, self-destructive, and a gift to our adversaries.”
Neil Chilson, a former FTC chief technologist now at the Abundance Institute, labeled it “massive overreach” that could harm the US AI sector and the military’s access to top technology.
A letter from former defense and intelligence officials, including ex-CIA Director Michael Hayden and retired leaders from the Air Force, Army, and Navy, expressed “serious concern.” They argued the authority is intended to counter foreign infiltration, not to penalize American companies for refusing to eliminate safeguards against mass surveillance and lethal autonomous weapons. “This is a category error with consequences that extend far beyond this dispute,” the letter stated.
What’s ahead for Anthropic?
Anthropic has vowed to challenge the designation in court, calling it “legally unsound” and unprecedented for a domestic firm. CEO Dario Amodei has described the Pentagon’s stance as demanding “any lawful use” without exceptions, a position the company says it cannot accept in good conscience.
The dispute has intensified Anthropic’s rivalry with OpenAI. OpenAI recently struck a deal with the Pentagon to provide ChatGPT for classified environments, potentially replacing Anthropic. OpenAI CEO Sam Altman later acknowledged the arrangement appeared “opportunistic and sloppy.”
Despite the setbacks in Washington, Anthropic has seen a consumer surge, with over a million daily sign-ups for Claude this week, propelling it past competitors such as ChatGPT and Google’s Gemini in app store rankings across more than 20 countries.