The standoff between Anthropic and the US government has largely been framed as a principled fight over artificial intelligence (AI) safety and ethics. In important respects, that framing holds. Anthropic’s refusal to dilute safeguards against domestic mass surveillance and fully autonomous lethal use marks a rare effort by a major technology firm to impose limits on state power—even at commercial cost. At a time when most large technology providers prefer accommodation over confrontation, that stance warrants recognition.
Yet the episode also exposes a deeper inconsistency that extends beyond one company. Anthropic’s ethical red lines appear firm when the risk concerns domestic surveillance of citizens, but more elastic when similar technology is deployed in external military operations. The reported use of its systems in the ongoing conflict with Iran underscores an uncomfortable truth: for many large technology firms, ethical restraint remains bounded by geography and citizenship. Harm enabled abroad is treated as categorically different from harm at home. That distinction may be legally defensible; morally, it is thin.
Geography of Ethics
The tension is not unique to Anthropic. It reflects a broader pattern in Big Tech’s engagement with state power. Companies increasingly speak the language of safety and responsibility—but only so long as those principles do not obstruct the strategic imperatives of the nations in which they are headquartered. The result is an ethics framework that sounds universal in rhetoric but proves national in application.
The vocabulary of global norms often yields quickly to the logic of national security.

Manhattan Project Echo

There is a historical echo. During the atomic age, scientists involved in the Manhattan Project wrestled with the consequences of creating a weapon whose use soon escaped their control. Early moral unease did little to slow the strategic logic of the state once the technology proved decisive. AI is not a singular catastrophic invention in the way nuclear weapons were, but the structural similarity is striking. Once a technology becomes central to national power, the space for its creators to impose meaningful limits narrows rapidly.
That dynamic is amplified by the changing nature of warfare. Contemporary conflict revolves less around mass mobilisation and more around information dominance, speed, and decision advantage. AI systems that synthesise intelligence, model scenarios, and compress decision cycles offer states a decisive edge. That advantage accrues disproportionately to countries that both deploy such systems and control their development—in practice, the US and China. Smaller states and non-aligned actors risk becoming dependent on platforms whose ethical boundaries they do not set.
As Anthropic clashes openly with Washington, OpenAI has positioned itself as a more adaptable partner, willing to work within government frameworks rather than challenge them. The contrast raises difficult questions. Is ethical resistance viable only until a more compliant supplier steps in? And if so, do corporate principles meaningfully constrain state use of AI—or merely determine which firms are rewarded? In a competitive marketplace, restraint can become a commercial disadvantage.
The larger risk is that AI accelerates a new form of technological imperialism—less visible than territorial conquest, but no less consequential. Anthropic’s stand is therefore both commendable and incomplete. It affirms the need for limits while revealing how fragile those limits become when technology, war, and national interest converge. The challenge ahead is not simply to make AI safer, but to confront how concentrated technological power can quietly redraw the global order—and to ask who, if anyone, has the authority to set its boundaries.
