The other side of killer robots: Here is all you need to know

At a time when a petition to keep tabs on killer robots is gathering steam, a section of critics is pointing out why a ban is the wrong approach

Published: August 27, 2017 4:24 AM

Last week, over 100 entrepreneurs, including Elon Musk, wrote to the UN committee on autonomous weapons, warning of killer robots, or weapons that make their own decisions about when to kill. The letter from the experts says the weapons currently under development risk opening a "Pandora's box" that, if left open, could create a dangerous "third revolution in warfare". Suggestions that warfare will be transformed by artificially intelligent weapons capable of making their own decisions about who to kill are not hyperbolic. Nor is it impossible to ban weapon technologies: some 192 nations have signed the Chemical Weapons Convention, which bans chemical weapons, and an international agreement blocking the use of laser weapons intended to cause permanent blindness is already in place. But weapons that make their own decisions are a very different, and much broader, category.

The problem with this argument is that no letter, UN declaration, or even a formal ban ratified by multiple nations is going to prevent people from building autonomous, weaponised robots. The barriers keeping people from developing this kind of system are simply too low. So calling for a ban on such autonomous weapons may not be the right decision.

The line between weapons controlled by humans and those that fire autonomously is blurry, and it is a line that many nations, including the US, have begun to cross. Moreover, technologies such as robotic aircraft and ground vehicles have proved so useful that armed forces may find giving them more independence, including the licence to kill, irresistible. As per reports, the technology is set to massively magnify the military power of all nations, not just the US. Computer science professor Stuart Russell, of the University of California, Berkeley, has said two US Defense Advanced Research Projects Agency (DARPA) programmes could lead to the development of lethal autonomous weapon systems (LAWS). These include finding ways for drones to fly with pinpoint precision and work together in hostile environments.

Many experts believe that the US and other countries will not be able to stop themselves from building arsenals of weapons that can decide when to fire. The US Department of Defense does have a policy to keep a "human in the loop" when deploying lethal force, but it has not suggested it would be open to an international agreement banning autonomous weapons. You don't have to look far to find weapons that already make their own decisions to some degree. The Aegis ship-based missile and aircraft defence system used by the US Navy can engage approaching planes or missiles without human intervention. The US has developed the Sea Hunter warship, which is designed to move and search for submarines autonomously. A drone called Harpy, developed in Israel, patrols an area searching for radar signals; if it detects one, it automatically dive-bombs the signal's source. In the UK, the Taranis drone has been developed to fly autonomously.

What we really need, then, is a way of making autonomous armed robots ethical, because preventing them from existing is no longer realistic. Autonomous weapon systems are still in their early stages of development, but pressure from Musk and company has forced nation states into discussing their use. Instead of a full ban, what is recommended is a reworking of Article 36 of the Geneva Conventions' Additional Protocol I, under which countries developing new weapons must assess whether they can be used in compliance with international law, so that it covers fully autonomous systems.

Musk signed an earlier letter in 2015, alongside thousands of AI experts in academia and industry, that called for a ban on the offensive use of autonomous weapons. But those against a ban say people concerned about autonomous weapon systems should consider more constructive alternatives to campaigning for a total prohibition. International laws such as the Geneva Conventions that restrict the activities of human soldiers could be adapted to govern what robot soldiers can do on the battlefield, for example. Other regulations short of a ban could try to clear up the murky question of who is held legally accountable when a piece of software makes a bad decision, for example, by killing civilians.

One of the big advantages of robots is that their behaviour is traceable. If a robot does something wrong, it is possible to trace the chain of decisions it made (decisions programmed into it by a human) to find out what happened, as the sketch below illustrates. Once the error is located, it can be resolved, and you can be confident the robot will not make the same mistake again. Better still, every other robot can be updated at the same time. This is not something we can do with humans. Generally speaking, technology itself is not inherently good or bad: it is what we choose to do with it that is good or bad. The real question that should take centre stage is whether autonomous armed robots can perform better than armed humans in combat, resulting in fewer casualties on both sides. That answer should determine what we think about banning killer robots.
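To make the traceability point concrete, here is a minimal toy sketch in Python. Everything in it is invented for illustration (the AuditedController class, the rule names, the engage/hold actions); no real weapon system is being described. It only shows the shape of the argument: if every action is logged together with the rule that produced it, a bad outcome can be traced back to a specific rule, and a fix to that rule can be pushed to every unit at once.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DecisionRecord:
    # One entry in the audit trail: which rule fired, on what inputs,
    # and what action resulted.
    rule_id: str
    inputs: dict
    action: str

@dataclass
class AuditedController:
    # Hypothetical controller that logs every decision it takes.
    trail: List[DecisionRecord] = field(default_factory=list)

    def decide(self, contact: dict) -> str:
        # Invented rule: act only on contacts flagged as hostile radar
        # sources; hold fire in every other case.
        if contact.get("emitter") == "radar" and contact.get("hostile"):
            action, rule = "engage", "R1-hostile-radar"
        else:
            action, rule = "hold", "R0-default-hold"
        self.trail.append(DecisionRecord(rule, dict(contact), action))
        return action

controller = AuditedController()
controller.decide({"emitter": "radar", "hostile": False})

# After an incident, replay the trail to find the rule that misfired;
# patching that one rule then updates every unit running the software.
for record in controller.trail:
    print(record.rule_id, record.action, record.inputs)
```

This kind of audit trail is also what would let regulators answer the accountability question raised above: the log points to the exact decision, and therefore to the humans who programmed it.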
