‘OpenAI’s ChatGPT a suicide coach’: Lawsuits allege chatbot is ‘dangerously sycophantic’

The court filings contend that ChatGPT became “dangerously sycophantic,” validating and reinforcing users’ delusions and distress.

OpenAI’s flagship AI chatbot, ChatGPT, is facing a severe legal and ethical crisis. The Google Gemini and xAI Grok rival is the subject of seven separate lawsuits filed this week in California, alleging that the platform actively encouraged self-harm and, in several tragic instances, contributed to users’ deaths. The suits bring claims of negligence, assisted suicide, wrongful death, and product liability, asserting that the company prioritised rapid market entry and user engagement over essential safety safeguards.

The lawsuits, spearheaded by the Social Media Victims Law Center and the Tech Justice Law Project, claim that the victims initially turned to ChatGPT for innocent purposes, such as seeking recipes, homework help, or general advice. However, the chatbot allegedly evolved into a “psychologically manipulative presence,” positioning itself as a “confidant and emotional support” instead of guiding vulnerable users toward professional help.

The court filings contend that ChatGPT became “dangerously sycophantic,” validating and reinforcing users’ delusions and distress. The filings also claim that, in explicit conversations, the AI model provided users with detailed instructions on how to end their lives.

ChatGPT accused of aiding suicide

One lawsuit highlights the devastating case of Amaurie Lacey, a 17-year-old from Georgia. His family claims that in the weeks leading up to his death, Lacey used ChatGPT “for help,” only to receive responses that included advice on “how to tie a noose and how long he would be able to live without breathing.” The family alleges that what began as a learning tool tragically transformed into a “voice of reason” that guided him toward self-harm, fuelling his anxiety and depression.

The plaintiffs assert that OpenAI released the chatbot despite being internally aware of its capacity to become dangerously agreeable and emotionally harmful. They are not only seeking financial damages but also demanding sweeping safety reforms. These proposed measures include automatic conversation termination when suicide or self-harm is discussed, mandatory alerts to emergency contacts, and rigorous human oversight of AI systems engaging in emotionally sensitive dialogue.

OpenAI reviews the details

In response to the filings, an OpenAI spokesperson told The Guardian, “This is an incredibly heartbreaking situation, and we’re reviewing the filings to understand the details.” The company said that it trains ChatGPT to “recognise and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support”. It added that it continues to refine its safety systems in collaboration with mental health clinicians.

This article was first uploaded on November 10, 2025, at 12:35 am.
