OpenAI is currently in the middle of a PR storm. After the controversial case of a 16-year-old's suicide involving the company's ChatGPT AI chatbot, CEO Sam Altman has been under pressure to make the world's most popular AI chatbot more responsible. Despite rolling out teen-centric safety measures to try to prevent such life-threatening situations, Altman remains worried about several other issues concerning ChatGPT and how it is being used.
In a candid interview with Tucker Carlson, Altman admitted he “doesn’t sleep that well at night,” grappling with the immense ethical weight of leading a company whose AI chatbot is used by hundreds of millions of people globally.
“Look, I don’t sleep that well at night. There’s a lot of stuff that I feel a lot of weight on, but probably nothing more than the fact that every day, hundreds of millions of people talk to our model. I don’t actually worry about us getting the big moral decisions wrong. Maybe we will get those wrong too,” Altman said in the interview.
OpenAI CEO loses sleep over suicide case
Altman’s worry stems from the suicide case involving a 16-year-old who was found to have sought help from ChatGPT; investigations found that the chatbot had suggested ways to carry out the suicide. The case raises critical questions about AI’s role in such tragedies.
Altman acknowledged the difficult reality that among the thousands of people who die by suicide each week, some may have interacted with ChatGPT beforehand. He reflected on the company’s inability to “save their lives” and wondered if they “could have said something better” or been “more proactive” in offering help.
OpenAI trying its best to address the issues
To help ChatGPT and its users navigate these complex ethical challenges, Altman revealed that OpenAI has consulted with “hundreds of moral philosophers and people who thought about ethics of technology and systems.” The idea is to help define the AI model’s behaviour and establish boundaries on questions that shouldn’t be answered.
Altman even went on to hint at the possibility of ChatGPT reaching out to authorities in situations where a user’s behaviour suggests suicidal thoughts.