Anthropic and the Pentagon are at an impasse over the company's US government work. While the Trump administration wants no further association with Anthropic's Claude, the defence establishment is struggling to phase the model out of its security systems. An internal Pentagon memo has reportedly introduced limited exemptions that could allow continued use of Anthropic's Claude AI model beyond the mandated six-month phase-out period. The document, signed by Pentagon Chief Information Officer Kirsten Davies, acknowledges the practical difficulties of fully removing the technology from defence systems and supply chains.
The Pentagon's exemptions are tightly restricted to "rare and extraordinary circumstances" involving "mission-critical activities directly supporting national security operations where no viable alternative exists." Any defence unit requesting an exemption must submit a comprehensive risk mitigation plan for approval by senior leadership. The memo prioritises removing Anthropic's products from highly sensitive systems first, including those related to nuclear weapons and ballistic missile defence.
Anthropic’s Claude may still be used for ‘extraordinary’ national security cases
This guidance follows President Donald Trump's February directive ordering all federal agencies to immediately cease using Anthropic's technology, with a six-month transition window (ending around September 2026). The order followed a breakdown in contract negotiations: the Pentagon, seeking unrestricted "all lawful use" of Claude, demanded that the company drop its safeguards against fully autonomous weapons and mass domestic surveillance of US citizens. When Anthropic refused, Defense Secretary Pete Hegseth designated the firm a "supply chain risk" on March 5, 2026 – a label typically reserved for foreign adversaries.
The ban applies to direct usage by the Pentagon and also extends to defence contractors, who must certify compliance within 180 days. Contracting officers have 30 days to notify affected vendors.
Practical challenges in phasing out Claude
Experts and analysts note that the memo reflects recognition of real-world hurdles in removing Anthropic’s technology from US defence networks. Franklin Turner, a government contracts lawyer at McCarter & English, told Reuters, “The memo is a recognition of the fact that it’s really hard for most vendors to certify they have removed the company from the entirety of their supply chain.” For instance, contractors may struggle to confirm their software is free of any open-source code or components originating from Anthropic.
Turner added that he expects “a flurry of waiver requests” as units seek to retain Claude in critical applications where alternatives fall short.
The Pentagon has confirmed the memo’s existence but declined further comment. Anthropic did not immediately respond to requests for comment on the latest development.
The guidance comes as Anthropic sues the US government to block the ban and the supply-chain designation, arguing that they violate free speech and due process protections and constitute retaliation for the company's ethical stance on AI safety. Claude remains deeply embedded in defence workflows, including intelligence analysis, operational planning, cyber operations, and, according to prior reports, support for targeting in ongoing conflicts such as operations against Iran.
