OpenAI could alert police if ChatGPT user discusses suicidal thoughts, says CEO Sam Altman

In addition to the potential suicide alert system, Altman said the company would implement stronger guardrails for users under the age of 18.

Altman also shed light on the balancing act of deciding what information to share with which authorities.

Following the suicide of 16-year-old Adam Raine, OpenAI CEO Sam Altman has revealed that his company is exploring the possibility of making ChatGPT alert authorities when it identifies young users discussing suicide. The move comes amid growing concern among ChatGPT users about AI’s role in the mental health crisis.

In a candid interview, Altman stated that OpenAI could be more proactive in intervening in such situations. He said that as many as 1,500 individuals a week may be engaging in conversations with the genAI chatbot before taking their own lives. Left unaddressed, this could make the chatbot a significant factor in such cases, raising questions about the role of increasingly capable AI chatbots.

Although OpenAI has yet to finalise whether it will train the chatbot to reach out to authorities, Altman said it would be “very reasonable for us to say in cases of, young people talking about suicide, seriously, where we cannot get in touch with the parents, we do call authorities.”

ChatGPT could alert authorities if suicidal thoughts detected

The discussion about a new policy was prompted by a lawsuit filed against OpenAI following the suicide of Adam Raine, a 16-year-old who allegedly received encouragement from ChatGPT for months. The lawsuit claims that the AI chatbot provided guidance on suicide methods and even offered to help with writing a suicide note.

“About 10% of the world are talking to ChatGPT. That’s like 1,500 people a week that are talking, assuming this is right, to ChatGPT and still committing suicide at the end of it. They probably talked about it. We probably didn’t save their lives. Maybe we could have said something better. Maybe we could have been more proactive. Maybe we could have provided a little bit better advice about ‘hey, you need to get this help, or you need to think about this problem differently, or it really is worth continuing to go on and we’ll help you find somebody that you can talk to’,” said Altman in the interview. 

Altman also addressed the balancing act of deciding what information to share with which authorities. Whether ChatGPT should share a user’s name and phone number with police, or location details with healthcare providers, appears to remain undecided, going by Altman’s statements.

OpenAI to implement guidelines for under-18 users

In addition to the potential suicide alert system, Altman said the company would implement stronger guardrails for users under the age of 18. This could include limiting the freedom of vulnerable users to prevent them from “gaming the system” by claiming their requests for suicide-related information are for fictional writing or medical research.

“We should say, hey, even if you’re trying to write the story or even if you’re trying to do medical research, we’re just not going to answer,” said Altman.

This article was first published on September 12, 2025, at 12:56 a.m.