Being human: Not yet

Written by Pratik Kanjilal | Updated: Jun 19 2014, 06:14am hrs
What is it about the Turing Test that brings out the worst in otherwise rational humans? Is it the need to reach for a long-awaited science fiction utopia in which machines do all the heavy lifting while human leisure flourishes unabashedly? Or is it the thrilling dread of a dystopia in which your daughter might elope with your boss's Roomba? The latest bout of Turing mania broke out at the Royal Society in London, when the Turing Test 2014, organised by Reading University to mark the 60th death anniversary of Alan Turing, was won by a Russian-Ukrainian program (Putin must be loving this) named Eugene Goostman. To one-third of the judges, Goostman the machine successfully masqueraded as a Ukrainian boy aged 13 with poor English.

Though the world's media made a big deal of it, it really isn't very exciting. This is no reflection on Goostman or his creators, Vladimir Veselov, who was born in Russia and now lives in the US, and Eugene Demchenko, born in Ukraine and now living in Mother Russia. Goostman's success is more a commentary on the catalytic power of geopolitics and globalisation than an affirmation of the arrival of machine intelligence. That's largely because machine intelligence was here well before Goostman. It runs riot on the nanosecond trading networks that threaten to shoulder warm bodies out of the world's bourses, leveraging fluctuations uncomputable by humans in real time (see Michael Lewis's new book, Flash Boys). You see it at work every day, in the suggestions thrown at you by search engines, shopping sites and news aggregators, which try to suss you out and match your interests to their ad inventories. Soon, it will drive Google's driverless car through America's streets.

True, these are specific, limited, goal-directed instances of intelligence, while Alan Turing had generic intelligence in mind when he formulated his test: something that understands string theory but is weird enough to have watched Barbarella 16 times, curses freely in traffic but can also steer a genteel dinner table conversation. In short, human intelligence. But since the human population is burgeoning, and since carbon-based humans want help rather than competition, goal-directed intelligence may interest the market more than silicon-based humanoid minds.

The world of computing has moved on since Alan Turing outlined his imitation game in a 1950 paper, which opened a new chapter in the philosophy of computing. The game has become known as the Turing Test, a name which suggests positive virtues such as credibility and certification. But Turing's original term suggested that mimesis, or duplicity, would help to game a human judge. If it is a test at all, it appears to be a test of cleverness rather than intelligence.

While the discussion of artificial intelligence was astonishingly advanced in Turing's time (the term 'neural networks' entered the literature even before the formal birth of artificial intelligence in 1956), little was understood about the human mind, and even less about possible machine minds. Turing was defining artificial intelligence without really knowing who would use it, or for what. Besides, he was sidestepping the difficult philosophical questions: What is intelligence? What is sentience? And what is humanity, anyway? The practical way out of these imponderables was to state that if something appeared to be human, it could be taken to be human.

Turing applied a constraint to the model: the communication between the human judge and the masquerading machine had to be text-only, stripped of the visual and auditory cues of human communication. Four decades later, after the birth of the Internet, this had an unintended consequence: all the best-known wannabe Turings were online chatterbots. Most of them had women's names: Eliza, Julia, A.L.I.C.E. Perhaps this tells us something about the human mind and the superabundance of male programmers, rather than about intelligence.

Chatterbots were programs accessible over telnet, a bare command line with no sensory cues. They could bat the breeze for about five minutes before the human interlocutor got suspicious, because the machine kept trying to drag the conversation back to known turf. Julia, for instance, kept wanting to know if you were a dog person or a cat person. She was also obsessed with the hamster she had once lost.

Finally, much has been made of Turing's statement that to pass his test, the machine had to fool 30 out of 100 humans. This has been fetishised, and Eugene Goostman scored big by fooling 33. However, the context of the passage suggests that Turing was citing a ballpark figure, not a rule. That is yet another reason why, now that artificial intelligence is on the brink of mass-market maturity, the field should regard Alan Turing as a path-breaking philosopher of logic and communications, not a literal algorithmist. The digital world has changed so much since his time that the practical details are now irrelevant.

[email protected]