Those warning about the need for artificial intelligence policy research may be articulating the fear of the unknown...
Industry and governments have turned artificial intelligence (AI) from an academic fascination into a potentially explosive technology, but there has been very little research into AI policy-making, though the need for it has been felt at least since the time of Isaac Asimov. Now, Elon Musk has jump-started the process, donating $10 million to the Future of Life Institute, which is launching a portal this week to seek applications and make grants globally, by open competition. It will fund research that ensures AI remains beneficial to humanity.
Looking back, 2014 was the year of AI. The first truly driverless car from Google, with no user controls, hit the streets. That was a visible vote of confidence in machines that can reason and learn as well as humans, and take over functions which humans perform a little erratically. Some drivers never get a ticket in their lives. Others drive drunk and run over people sleeping on the pavement. Google's machine should behave more consistently.
But 2014 ended with repeated warnings that AI is being developed as a purely technical project, with insufficient attention to policy and ethics. Stephen Hawking endorsed the capability of AI to improve human security while warning that associated risks are poorly researched. Elon Musk, the man behind PayPal, the SpaceX project and Tesla automobiles, spoke more clearly. He likened current AI research to ‘summoning the demon’.
Industry leaders in AI protested that baked-in limitations keep runaway cyborgs safely restricted between the covers of science fiction. AIs are not sentient in the way that humans are. They are task-oriented and, like obedient copies of Arjuna, see only the target and nothing else. They drive cars, manage inventories, play weatherman and, mimicking a trait of higher organisms, may be programmed to learn to do these things better from experience. However, they do not seek to overstep the roles set by their programmers. Sebastian Thrun's code, which runs Google's driverless car, does not want to become Sebastian Thrun. A bot manning an internet chatroom does not want to create little bots to take over other chatrooms. Machines, however intelligent, have no notion of the human dream of overcoming one's limitations.
But what Hawking and Musk are warning against is not the state of the art but the ‘technological singularity’, a projected cusp event that has fascinated scientists and futurists since the late 1950s, when it was articulated by the mathematician John von Neumann. The singularity is to technological growth what the speed of light is to conventional physics—the limit beyond which the world becomes incomprehensible. It is visualised as the point when machines evolve faster than organic life and become bafflingly intelligent, and their political and cultural products become unforeseeable. The result is a transhumanist future, a mashup of Ray Kurzweil and Francis Fukuyama in which it becomes as meaningless to talk of the man-machine interface as it is to talk about a brain-body interface.
While this sounds like a mass extinction of human civilisation, all that the singularity threatens is a fundamental change in the way humans interact with each other and with the environment. It would not be the first cataclysmic progressive event in human history. Prior technologies like agriculture profoundly altered human culture. Language may have had an even more powerful effect, empowering humans to collaborate in complex ways to achieve previously unattainable goals across space and time. Such early 'technologies' had sweeping political and cultural effects, and may have triggered the same fears that AI is raising now.
How near is the singularity? Even the most radical estimates give us at least 15 years of peace and quiet before the human condition becomes dramatically different, but these are only educated guesses. The contention that AIs are specialists remains true, and they remain contained within areas like robotics, natural language processing and sensory perception, but the internet could, theoretically, make it trivial for them to gang up. Take Google, a company whose direction is explicitly set by a focus on machine learning, a central feature of artificial intelligence. Crucially, the learning resides not in client devices but in Google's distributed data centres. When one driverless car learns something new, presumably, all driverless cars learn it too. This connected learning environment has contributed greatly to the reliability of Google's translation service.
But it is precisely such connectivities and distributed computing that raise public fears about AIs evolving autonomously. Human oversight would reduce the rate of learning unacceptably. Indeed, human intervention seems absurd in an industry which expects its next big wave from the internet of things, which would create new opportunities when connected ‘things’ are allowed to interact multilaterally. But before allowing a lot of autonomous objects from medical nano-devices to passenger cars to get cosy, perhaps we should try to peer beyond the singularity. Those warning about the need for AI policy research may be articulating the fear of the unknown. But, more importantly, they are stressing the need to know.