With AlphaGo, a programme built by Google-owned artificial intelligence (AI) developer DeepMind, beating a champion human player of Go, a strategy boardgame believed to be far more complex than chess, a new frontier has been breached in machine learning. The Economist reports that the AI research community had long regarded beating a human at Go (in which players alternately place black and white stones on the intersections of a 19×19 grid of lines, aiming to control the most territory) as a grand challenge for AI. The sheer number of possible moves (roughly 200 legal options per turn, compared with about 20 in chess) defeats brute-force calculation as an approach to mastering the game.
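A back-of-the-envelope calculation shows why brute force fails: using the article's own figures of roughly 200 legal moves per turn in Go against 20 in chess, Go's game tree outgrows chess's by ten orders of magnitude within just ten moves. A minimal sketch (the branching factors are the article's rough averages, not exact values):

```python
def tree_size(branching: int, depth: int) -> int:
    """Number of distinct move sequences of length `depth`,
    assuming a fixed average branching factor."""
    return branching ** depth

# Rough averages from the article: ~20 legal moves per turn in
# chess, ~200 in Go (illustrative figures; real values vary).
chess = tree_size(20, 10)
go = tree_size(200, 10)

print(f"chess after 10 moves: {chess:.1e} sequences")
print(f"go after 10 moves:    {go:.1e} sequences")
print(f"go / chess ratio:     {go // chess:.1e}")
```

Even at this shallow depth, exhaustively evaluating every line of play in Go is hopeless, which is why AlphaGo had to learn judgement rather than merely calculate.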
While IBM’s Deep Blue, which beat one of chess’s all-time greats, Garry Kasparov, in 1997, relied on strategies programmed by humans, AlphaGo largely taught itself to play Go, training on records of human games and then refining its decisions by playing against itself. So, when it beat Fan Hui, the European champion, it was a victory for AI research everywhere. Machine learning elsewhere is already helping computers learn to recognise human voices and faces and respond to speech. If AlphaGo beats the Korean player Lee Sedol, who is regarded as one of the greatest ever and whom DeepMind has persuaded to face the programme in March, the human brain might just have to learn to cede ground to AI.