By Vinod Dhall
Who must be in control of AI? That question lay at the heart of the recent stand-off between the Pentagon and AI lab Anthropic over the use of the latter’s next-generation AI assistant and family of large language models, Claude. Anthropic, one of the leading players in the AI field, claims that Claude is designed to be not only fast, accurate, and capable of complex reasoning but also secure and safe.
The fight with the Pentagon erupted over Anthropic’s refusal to allow its AI model to be used for certain purposes, such as domestic surveillance within the US or the development of autonomous weapons. That is perfectly understandable for a company that values clear guard rails around how its AI is used, and that describes its approach as “constitutional AI”.
The Trump administration responded harshly by declaring Anthropic a “supply chain risk”, a designation normally reserved for suspect foreign suppliers.
A little while later, Anthropic’s close competitor, OpenAI, where Anthropic founder Dario Amodei previously worked before leaving over ethical differences, seized the opportunity to win the US government’s massive orders and became a supplier to the Pentagon. This absence of like-mindedness among the AI leaders did the damage.
The big tech rivalry
The intense rivalry between the top AI firms such as Anthropic, OpenAI, Perplexity, Elon Musk’s xAI, and even Google and Microsoft is well known. There is a race among them to bring out the most advanced, most intelligent and biggest models, and to gain scale in terms of users and revenues; the mantras are bigger, faster, hyper-intelligent, multipurpose.
In the process, inconvenient concerns like safety and usage restrictions can fall by the wayside. Anthropic’s Amodei flagged these concerns, and he has paid a heavy commercial price for doing so.
In his essay “The Adolescence of Technology”, Amodei warns of the threats that AI poses to humanity, such as the emergence of AI-empowered terrorism, autonomous weapons, AI-based mass surveillance, and, frighteningly, the possibility of killing millions through the development and abuse of biological weapons.
The problem is further complicated by the real-life concern that even if one AI lab agrees to build ethical restraints into the capabilities or end use of its AI, competitors may forge ahead unmindful of such moral hazards. Similar warnings have come from Mustafa Suleyman, author of The Coming Wave, previously founder of Inflection AI and now head of Microsoft AI.
Some time ago, an article in The Economist highlighted four ways in which powerful AI can go wrong and cause immeasurable harm to human civilisation at scale. Principal among these is “misuse” by a malicious individual or group to cause deliberate damage. Second is “misalignment” between what the creator intended and what the AI itself might come to pursue, a case of intelligent AI going rogue.
Third, just like humans, the AI might make a “mistake” when confronting real-world complexities that prevent it from fully understanding the implications of its actions. Fourth is the possibility of structural risks, where no single AI model is to blame and yet harm results.
In his essay, Amodei too warns of the unparalleled harm that AI can cause in various ways: autonomy risks, where an AI develops malicious intentions and goals of its own; malicious actors taking hold of AI for destruction (he speaks, for instance, of a dictator or rogue actor building an AI model and using it to gain decisive, dominant power over the world); and vast disruption to the global economy, causing mass unemployment and radically concentrating wealth in the hands of a few.
Both Amodei and Suleyman have in their own way urged that while AI is still evolving and growing up (in an adolescent stage!), the world must strive hard to regulate its growth within moral boundaries.
Governing AI
AI is too powerful a technology, and its capabilities too far-reaching for its control to be left entirely in the hands of private actors. Nor can governments be at liberty to deploy it for questionable objectives like mass surveillance or autonomous weapons, the very things Anthropic objected to. AI is like a public good; civil society must necessarily get involved and be vigilant about its misuse.
While AI technology is still evolving and growing, preventive steps need to be instituted now. First, at the technological level itself, models must be trained with moral guard rails and suitable controls built in. Audit systems, both voluntary and mandatory, have to be developed to ensure transparency and accountability. AI labs and businesses have to agree on built-for-purpose standards and standard operating procedures.
While deploying AI, businesses should draw red lines to contain its misuse. Governments, without interfering in ways that could stifle efficiency and competition, must institute just the right amount of regulation, blocking paths that could lead to destructive misuse.
Just as with other path-breaking technologies (e.g., nuclear power), which can be used both immensely beneficially and dangerously, global consensus and cooperation are essential. An international oversight agency for AI, akin to the International Atomic Energy Agency, should be established, armed with adequate powers of inspection and regulation.
As Amodei has noted: Humanity needs to wake up.
The writer is former head, Competition Commission of India, and Secretary, Government of India
Disclaimer: The views expressed are the author’s own and do not reflect the official policy or position of Financial Express.
