OpenAI CEO Sam Altman wrapped up a lively AMA (Ask Me Anything) session on X (formerly Twitter) with a summary post highlighting three major insights from the discussion surrounding his company’s recent partnership with the US Department of War (DoW). The post highlights ongoing tensions in the AI community about power dynamics between governments and private tech firms, nationalisation risks, and the underappreciated efforts in maintaining national security.

Altman’s online discussion was initiated to address questions about OpenAI’s agreement to deploy its AI models on the DoW’s classified networks, a deal that includes safeguards against domestic mass surveillance and autonomous weapons. The session, however, drew thousands of engagements, blending support, criticism, and probing inquiries.

Altman’s insights on power, nationalisation, and security

In his summary, Altman first noted surprise at the level of debate on whether “democratically elected governments” or “unelected private companies” should wield more influence over advanced technologies like AI. “This seems like an important area for more discussion,” he wrote, acknowledging the divide but firmly favouring governmental authority.

Next, he addressed an underlying concern – the potential for government nationalisation of AI companies like OpenAI. While admitting he has contemplated the idea, even suggesting AGI development might benefit from being a government project, Altman downplayed its likelihood under current conditions. He stressed the value of close public-private partnerships to navigate AI’s transformative potential responsibly.

Third, Altman observed that many take national security for granted, crediting the “tremendous work” required to sustain it. He viewed this complacency as generally positive but called for greater appreciation of the efforts involved.

Altman expressed gratitude for the “reasonable and good-faith engagement,” which exceeded his expectations, and promised to respond to more questions later.

Online commenters, however, raised further questions about Altman’s take on the episode and whether OpenAI had played into the hands of government agencies. One even called Altman “the master of emotional blackmail”.

How OpenAI entered the scene after Anthropic opposed the US DoW

The controversy began when Anthropic, led by CEO Dario Amodei, publicly refused the Pentagon’s (now the Department of War’s) demands to remove built-in safeguards and contract restrictions on its Claude AI models covering two key “red lines”: prohibitions on domestic mass surveillance and on fully autonomous weapons that remove humans from the decision loop. Amodei argued that current frontier AI systems lack the reliability for such high-stakes applications and that removing these guardrails would put warfighters and civilians at risk. He offered collaboration on R&D to improve reliability but rejected unrestricted “any lawful use” access.

The DoW, under Defense Secretary Pete Hegseth and with backing from President Donald Trump, responded aggressively. Trump accused Anthropic of attempting to “strong-arm” the government and ordered federal agencies to cease using its technology. The Pentagon designated Anthropic a “supply chain risk”, effectively blacklisting it from government contracts and pressuring contractors to avoid its tools. Threats also included potential invocation of the Defense Production Act to force compliance. Anthropic vowed to challenge the designation legally, calling it baseless and dangerous.

Hours after the blacklist announcement, OpenAI announced its own agreement with the DoW to deploy models on classified networks. Altman highlighted shared safety principles, including the red lines on surveillance and human oversight for force decisions, technical safeguards, cloud-only deployments, and Frontier Defense Experts (FDEs) for oversight. OpenAI urged the DoW to extend identical terms to all AI companies to promote de-escalation and industry-wide standards.