Remember how, within 24 hours of being created, Microsoft’s chatbot Tay had to be shut down because it became a Nazi-loving misogynist? The bot, unleashed on Twitter, was designed to learn from the tweets it read and the responses it got; with more Hitler-loving women-haters out there than we imagine, it wasn’t hard for Tay to adopt their persona. Google has just found something similar while trying to create an open dataset of drawings to help train machine-learning systems and to understand how they work. It collected more than 800 million drawings from 20 million people in 100 nations, on subjects ranging from cats to mugs, tables and chairs. Mug handles, for example, pointed in opposite directions, and chairs were drawn facing forward or sideways, depending on the nation or part of the world they came from.
While that is useful, and it allowed computers to understand how the same object is picturised in different parts of the world, some of the conclusions the machines came to were unacceptable. The most common form of shoe, it appeared, was a pair of lace-up sneakers; that is fine, but women’s high heels were not recognised as a form of shoe at all. Doctors, similarly, were associated with men, almost never with women.
If, two years on, we haven’t progressed much from the time Google’s photo app identified black people as ‘gorillas’, it is because of a fatal flaw in any form of artificial intelligence: it reflects us, warts and all. The question, then, is how do you code out our biases and, if you can, what do you replace them with? Till then, it is a safe bet that algorithms born of a certain milieu will reflect its likes and dislikes even while professing to be neutral and objective.