By Pooja Arora and Mukesh Kumar
At the first-ever debate on Artificial Intelligence (AI) at the United Nations Security Council, Secretary-General António Guterres called for an international body to regulate AI. The call was reminiscent of the Baruch Plan, presented by the USA at the UN in 1946, which aimed to establish an international control system for nuclear weapons and atomic energy to prevent further proliferation. It failed due to the geostrategic fault lines of the time.
In September 2017, Russian President Vladimir Putin said, “The one who becomes a leader in this sphere (i.e., AI) will be the ruler of the world.” The statement fires the imagination: killer robots in combat, powerful computers and humanoids assisting high-ranking generals with strategic decisions in war rooms, autonomous machinery identifying targets and navigating difficult, inaccessible terrain while conducting ISR (intelligence, surveillance, and reconnaissance) operations. Artificial intelligence is the buzzword of the twenty-first century, and its applications in defence, or the militarization thereof, have multidimensional ramifications for ethics, law, and international relations.
AI has no single definition. It falls on a spectrum differentiated by decision-making capacity, analytical ability, and functional utility. A drone, for example, may be operated and directed by a human or act autonomously. AI also sits at the hardware-software interface: existing machinery may be augmented by software to work under human direction or to function completely autonomously, making decisions without any human intervention. The latter requires extensive training.
The differences between various forms of AI can be understood through the differences between the Cybermen (machine-augmented humans from Doctor Who), C-3PO (a polite assistant android from Star Wars), Commander Data (a sentient android from Star Trek), Vision and Ultron (benevolent and malevolent AI personalities, respectively, from the Marvel Universe), and the Borg (humanoids operating under a single command-and-control system, also from Star Trek). Interestingly, besides C-3PO, all of these fictional AI characters have participated in or initiated wars based on their exclusive visions of life in the universe. The twenty-first-century version of combat is a case of science-fiction imagination coming to life.
The applications of the technologies popularly associated with the fourth industrial revolution were observed first-hand in Ukraine. Extensive use of drones, GPS-guided Excalibur shells, and an increasingly efficient ‘kill chain’ enabled by Kropyva (an app that relays the location of Russian assets to Ukrainian artillery batteries), combined with satellite internet provided by Starlink, has given military establishments across the world a glimpse of future wars. The Russian military is waging electronic warfare, using jammers to thwart Ukrainian advances. Both sides are using AI to analyse the large volumes of data generated by intelligence assets (satellites, cyber, drones, humans, etc.).
The impact of AI is not limited to the tactical level; it extends to the theatre and strategic levels, affecting where, how, and whether war is waged. It shapes technology acquisition, personnel training, the military-industrial complex, and the strategic cultures of nations across the world. AI increases the likelihood of hybrid warfare and grey-zone operations. The hitherto conventional theatres of war (land, sea, and air) are now supplemented by cyber, space, intelligence, psychological, and informational warfare. The key to winning any data-driven, networked war lies in incapacitating the enemy’s communication and intelligence assets. Because AI is a democratised technology, developed largely in the private sector yet carrying military applications, the cast of actors in a war may expand, drawing civilians to the tactical level as combatants in hybrid grey-zone operations.
AI can process large volumes of intelligence data to make predictive analyses, adapt quickly to enemy strategy through pattern recognition, and pinpoint logistical loopholes with accuracy. This transforms the command-and-control hierarchy, changes the speed at which wars can be fought, and makes it easier to wage war from thousands of miles away using remotely operated robots or drones. As a thought experiment, a country might use predictive AI modelling to decide whether waging war on an enemy would lead to victory.
AI adapts quickly to enemy strategy. Conventional methods of analysis are slower and rely on previously known vulnerabilities, an approach that is inadequate against an enemy using AI to design its war strategy. This raises the question of responsibility for decision-making. Cyber wars can be fought via AI with little human involvement, but who gives the ‘kill’ signal on the battlefield becomes a crucial question. With lethal autonomous weapon systems under development across the world, this is an urgent issue for international law and the discipline of ethics.
All domains of warfare, from ISR to combat, are being transformed by AI. Human intelligence is augmented by machine-generated intelligence that combines data from satellites, social media, and the deep web. AI can even replicate a person’s personality from their digital footprint. The intelligence so generated can be made available to troops on the battlefield or to other agencies, increasing the efficiency of the kill chain. It can also be used to wage psychological and informational warfare against the enemy even before combat commences; increasingly sophisticated deepfakes aid state and non-state actors in this endeavour. The impact of disinformation and misinformation on a country’s political stability is well known.
Technological innovation in military AI is poised to usher in a transformative paradigm shift. Nonetheless, the perils intrinsic to data-centric warfare must be acknowledged. It is also worth underscoring that technology, whether nuclear or AI, originates in the human intellect. A harmonious synergy between human ingenuity and AI can harness the technology’s optimal capabilities while assuaging the attendant risks.
A poignant illustration of this interplay between human judgment and machine intelligence is the incident of 1983, when the Soviet Union’s early-warning satellite system erroneously reported five nuclear-armed Intercontinental Ballistic Missiles (ICBMs) approaching its territory from the United States. The misidentification could have precipitated a catastrophic nuclear conflict between the United States and the USSR, or even a global conflagration, a Third World War.
It was the astute judgment of Lieutenant Colonel Stanislav Petrov, an engineer serving in the Soviet Air Defence Forces, that decisively averted the impending catastrophe. The incident underscores the enduring value of human discernment over its machine counterpart.
An optimist would hail AI as the holy grail of war strategy and dream of ruling the world; an idealist would promote its international regulation. With nuclear weapons, the logic of deterrence was simple. With the militarization of AI, there is an increased risk of miscalculation and miscommunication due to faulty data, and collaboration between nations on AI is fraught with the fault lines of geostrategic competition. The civilian applications of AI are simpler to regulate. It remains to be seen whether regulation of its military applications will meet the fate of the Baruch Plan.
The authors are Ph.D. scholars at Jawaharlal Nehru University, New Delhi.
Disclaimer: Views expressed are personal and do not reflect the official position or policy of Financial Express Online. Reproducing this content without permission is prohibited.