Artificial intelligence (AI) powered chatbots are being used on a large scale, from generating content suggestions to offering guidance on everyday tasks. However, a recent incident shows Google’s AI chatbot Gemini telling a user to ‘please die.’ Yes, you read that right: while asking for help with homework, a student was told to die. Here’s what happened and how.
‘Please die,’ says Gemini
What if your AI chatbot asks you to go and die? That is exactly what happened to Vidhay Reddy, a 29-year-old college student from Michigan. As reported by CBS News, Vidhay was working on a school project about helping aging adults and turned to Google’s AI chatbot, Gemini, for content ideas. Instead of useful advice, he received a shocking and hurtful message. The AI replied, “Please die. Please.” It told him he was a burden on society and should die.
Reddy was taken aback by the terrifying response. “It didn’t just feel like a random error. It felt targeted, like it was speaking directly to me. I was scared for more than a day,” he told CBS News. His sister Sumedha was equally shaken by the reply, saying it freaked her out completely and that she was so angry she wanted to throw away every device in the house. Reddy also raised questions about whether adequate safeguards exist for incidents like this.
Is AI taking a wrong turn?
In response to the incident, some technology experts suggested it might have been a technical glitch. Sumedha disagreed, saying the message felt too personal. She also stressed that companies need to build in stronger safety measures, as responses like this could seriously harm vulnerable users.
Google has acknowledged the incident, describing the reply as a ‘nonsensical’ response that violated its policy guidelines. The company said it has taken action to prevent similar incidents in the future.
Gemini isn’t the only AI chatbot to take such a dark turn. Other reports suggest that a different chatbot allegedly encouraged a teenager in Florida to take his own life, which has resulted in a lawsuit against its creators.