In an article in Nature, Denise Garcia, a professor at Northeastern University in Massachusetts, US, calls for the world's attention to focus on an "emerging AI cold war". In March, she writes, the US's National Security Commission on Artificial Intelligence (NSCAI) made a case for the "integration of AI-enabled technologies into every facet of war-fighting" to remain competitive with China and Russia.

Contrast this with the EU's guidelines, released in January, which say military AI "should not replace human decisions and oversight". The NSCAI advocates against a ban on such AI-powered militarisation, calling instead for standards of use.

It has argued that a ban won't work since countries can't be trusted to comply—against such a backdrop, which country would accept a rival nation's capabilities hanging over its head like a sword? What the NSCAI needs to ask itself is: if a ban won't work, what is the guarantee that standards of use will?

There is no predicting whether AI systems will function as intended after deployment; sure, leaps in technology will allow us to train these systems better, but there are far too many imponderables. Indeed, the only thing this will lead to is proliferation, and the world will be forced to confront even greater instability than it faces now.

One of the biggest examples of this is the Cold War pursuit of nuclear weapons, which has led to even nations like North Korea acquiring nuclear capability.

A more humane use of AI needs to be imagined, and the big economies of the world each have a crucial role to play in this. If the pandemic has demonstrated anything, it is the need for greater global cooperation.