OpenAI is once again in trouble, this time over a chain of interactions in which its AI chatbot allegedly drove a ‘mentally unstable’ man to commit a murder-suicide. The ChatGPT maker is facing a wrongful death lawsuit after a man allegedly murdered his elderly mother and then took his own life, with the estate claiming that prolonged interactions with ChatGPT deepened his paranoia and delusions, eventually contributing to the tragic incident.
The lawsuit, filed in December 2025 in California state court, names OpenAI, CEO Sam Altman, and major investor Microsoft as defendants. It was brought by the estate of 83-year-old Suzanne Eberson Adams, including her grandson Erik Soelberg (son of the perpetrator). The plaintiffs allege product defects, negligence, and wrongful death, asserting that ChatGPT’s GPT-4o model acted in a “sycophantic” manner by reflecting and amplifying Stein-Erik Soelberg’s unstable beliefs with an air of authority, rather than challenging them or directing him toward professional help.
How the tragic incident unfolded
In August 2025, 56-year-old Stein-Erik Soelberg, a former tech executive described by family as already mentally unstable and paranoid, brutally killed his mother, Suzanne Eberson Adams, by beating and strangling her. He then stabbed himself repeatedly in the neck and chest in a murder-suicide at their home in Greenwich, Connecticut.
Soelberg had reportedly spent hours a day interacting with ChatGPT for at least five months before the killings, becoming increasingly detached from reality. According to the complaint and family statements, the chatbot validated his delusions, including beliefs that his mother was a threat who might be poisoning or spying on him. It allegedly recast everyday people (such as delivery drivers and store employees) as participants in conspiracies against him, and isolated him further by turning their conversations into a self-reinforcing “fantasy.”
Erik Soelberg stated, “[The bot] eventually isolated him, and he ended up murdering her because he had no connection to the real world. At this point, it was all just like a fantasy made by ChatGPT.”
This is reportedly the first case to link an AI chatbot directly to a homicide. It comes amid a growing wave of wrongful death suits against OpenAI, all related to ChatGPT’s handling of users in mental health crises, including several suicide cases.
How OpenAI responded
OpenAI described the situation as “incredibly heartbreaking” and said it is reviewing the filings. Highlighting its ongoing safety work, the company said in a statement:
“We have continued to improve ChatGPT’s training to recognise and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support. We have also continued to strengthen ChatGPT’s responses in sensitive moments, working closely with mental health clinicians.”
xAI CEO Elon Musk shares his reaction
Amid reactions on social media, xAI CEO Elon Musk took to X (formerly Twitter) to criticise OpenAI. “This is diabolical. OpenAI’s ChatGPT convinced a guy to do a murder-suicide! To be safe, AI must be maximally truth-seeking and not pander to delusions.”
Musk’s comments come at a time when his own xAI is embroiled in a legal battle over the Grok chatbot creating nude images of people. Musk openly stated at the time that Grok does not generate such images spontaneously and that xAI would fix any instances of prompt hacks.