
Microsoft showcases AI bot that makes phone calls to humans

At an AI event in London on Tuesday, Microsoft CEO Satya Nadella revealed that the company's Xiaoice social chat bot has 500 million "friends" and more than 16 channels for Chinese users to interact with it through WeChat and other popular messaging services.

San Francisco | Published: May 23, 2018 3:16 PM
An electronic Microsoft logo is seen at the Microsoft store in New York City, July 28, 2015. REUTERS/Mike Segar

While Google Duplex, which lets AI mimic a human voice to make appointments and book tables through phone calls, has both mesmerised people with its capabilities and attracted flak on ethical grounds, Microsoft has showcased a similar technology it has been testing in China. At an AI event in London on Tuesday, Microsoft CEO Satya Nadella revealed that the company’s Xiaoice social chat bot has 500 million “friends” and more than 16 channels for Chinese users to interact with it through WeChat and other popular messaging services. “Microsoft has turned Xiaoice, which is Chinese for ‘little Bing’, into a friendly bot that has convinced some of its users that the bot is a friend or a human being. Xiaoice has her own TV show, it writes poetry and it does many interesting things,” The Verge quoted Nadella as saying.

Xiaoice interacts in text conversations, but the company has now started allowing the chat bot to call people on their phones. The bot does not work exactly like Google Duplex, which uses the Assistant to make calls on a user’s behalf; instead, it holds a phone conversation with the user directly. “One of the things we started doing earlier this year is having full duplex conversations. So now Xiaoice can be conversing with you in WeChat and stop and call you. Then you can just talk to it using voice,” Nadella was quoted as saying.

Microsoft has already learned how badly such experiments can go when humans misbehave. Two years ago, the company launched an artificial intelligence (AI)-powered bot on Twitter, named Tay, for playful chat with people, only to silence it within 24 hours as users started sharing racist and offensive comments with the bot.

Launched as an experiment in “conversational understanding” and to engage people through “casual and playful conversation”, Tay was soon bombarded with racist comments, and the bot repeated those comments back to users with her own commentary. Some of Tay’s tweets referred to Hitler, denied the Holocaust, and supported Donald Trump’s immigration plans.

Later, a Microsoft spokesperson confirmed to TechCrunch that the company was taking Tay off Twitter because people were posting abusive comments to her.

