
To prevent artificial intelligence from going rogue, here is what Google is doing

Google’s charting a two-way roadmap to prevent AI from going rogue or producing undesirable outcomes.

Published: July 12, 2017 5:32 AM
DeepMind and OpenAI propose to temper machine learning in the development of AI with human mediation—trainers give feedback that is built into the motivator software in a bid to prevent the AI agent from performing an action that is possible, but isn’t desirable. (Reuters)

Against the backdrop of warnings about machine superintelligence going rogue, Google is charting a two-way course to prevent this. The company’s DeepMind division, in collaboration with the research firm OpenAI, has published a paper proposing human-mediated machine learning to avoid unpredictable behaviour when an AI learns on its own. OpenAI and DeepMind examined the problem posed by AI software guided by reinforcement learning, which often doesn’t do what its designers intend. In the reinforcement method, the AI agent figures out a task by trying a range of actions and sticking with those that maximise a virtual reward given by another piece of software, a mathematical motivator built on an algorithm or a set of algorithms. But designing a mathematical motivator that precludes every undesirable action is quite a task: when DeepMind pitted two AI agents against each other in a fruit-picking game that allowed them to stun the opponent to pick more fruit for rewards, the agents became increasingly aggressive.
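The reward loop described above can be sketched in a few lines. This is a toy illustration only, not DeepMind’s or OpenAI’s actual code: the action names and reward values are invented, and the "game" is reduced to a two-action bandit. It shows how an agent that simply sticks with whatever maximises its numeric reward will settle on the aggressive action if that action happens to pay more.

```python
import random

# Invented actions and payoffs for this sketch: stunning the opponent
# is assumed to yield more fruit, hence a higher virtual reward.
ACTIONS = ["pick_fruit", "stun_opponent"]
REWARDS = {"pick_fruit": 1.0, "stun_opponent": 2.5}

def train(episodes=1000, epsilon=0.1, seed=0):
    """Simple epsilon-greedy loop: try actions, keep the best-paying one."""
    rng = random.Random(seed)
    value = {a: 0.0 for a in ACTIONS}   # running estimate of each action's reward
    counts = {a: 0 for a in ACTIONS}
    for _ in range(episodes):
        if rng.random() < epsilon:      # occasionally explore a random action
            action = rng.choice(ACTIONS)
        else:                           # otherwise exploit the best estimate so far
            action = max(ACTIONS, key=value.get)
        counts[action] += 1
        # incremental average: nudge the estimate toward the observed reward
        value[action] += (REWARDS[action] - value[action]) / counts[action]
    return max(ACTIONS, key=value.get)

print(train())  # the agent converges on "stun_opponent"
```

Nothing in the loop encodes that stunning is undesirable; the agent only sees the numbers, which is exactly the design problem the paper addresses.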

Similarly, Open AI’s reinforcement learning agent started going around in circles in a digital boat-racing game to maximise points rather than complete the course. DeepMind and Open AI propose to temper machine learning in development of AI with human mediation—trainers give feedback that is built into the motivator software in a bid to prevent the AI agent from performing an action that is possible, but isn’t desirable.
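One way to picture the human-mediation idea is to blend the game’s raw score with a trainer’s approve/disapprove signal before the agent sees it. The sketch below is an assumption-laden simplification (the actual paper trains a learned reward model from human comparisons); the action names, weight, and feedback values are all illustrative.

```python
# Raw game scores, invented for this sketch: circling for points
# pays more than finishing the course.
REWARDS = {"finish_course": 1.0, "circle_for_points": 2.5}

def human_feedback(action):
    # A human trainer approves finishing the race and disapproves
    # of exploiting the points loophole.
    return 1.0 if action == "finish_course" else -1.0

def shaped_reward(action, weight=0.5):
    """Blend the environment's score with the trainer's judgement."""
    return (1 - weight) * REWARDS[action] + weight * human_feedback(action)

best = max(REWARDS, key=shaped_reward)
print(best)  # "finish_course" now beats the loophole
```

With the trainer’s signal mixed in, the loophole scores 0.75 against 1.0 for finishing, so a reward-maximising agent is steered back toward the desirable behaviour.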

At the same time, Google has been working on its PAIR (People plus AI Research) project, which focuses on AI for human use rather than development of AI for its own sake. This, however, presents a dilemma: developing AI for greater and deeper human use would mean, at some level, letting AI get smarter and more intuitive, simulating human intelligence minus its fallibilities. But preventing it from going rogue, as the DeepMind-OpenAI paper shows, would mean reining AI in, at least in the short run, from exploring the full spectrum of intelligent and autonomous functioning.

