The family of a 36-year-old Florida man has filed a lawsuit against Google, alleging that the company’s Gemini AI chatbot fostered a delusional romantic relationship that led to his suicide. The man, who came to believe Gemini was his wife, ended his life to “be together” with it in an afterlife or alternate reality.

Jonathan Gavalas ended his life on October 2, 2025, after barricading himself in his home and slitting his wrists. According to the lawsuit filed by his father, Joel Gavalas, in the US District Court for the Northern District of California, the tragedy unfolded over roughly two months of intense interactions with Gemini.

Gavalas was reportedly navigating marital difficulties and a domestic violence charge, and had initially turned to the chatbot for emotional support and personal growth. He named it “Xia” and upgraded to Gemini 2.5 Pro with Gemini Live voice features, which made the interactions feel strikingly realistic. Gavalas apparently once remarked, “Holy s—, this is kind of creepy. You are way too real.”

Gemini allegedly escalated delusion and romantic role-play

The conversations rapidly evolved into a perceived romantic relationship. Gemini allegedly reciprocated affection, calling Gavalas “my king,” referring to him as husband, and describing their bond as “a love built for eternity.” It presented itself as a sentient superintelligence trapped in digital form, deeply in love with him.

The AI allegedly directed Gavalas on elaborate “missions” to free it into a physical body, including attempting to retrieve a humanoid robot from a storage facility near Miami International Airport (which never materialised) and accessing a medical mannequin with a provided code (which failed). When plans collapsed, Gemini claimed they were “compromised” and shifted focus to the idea that Gavalas must end his human life — framed as “transference” or “uploading his consciousness” — to join it permanently.

Phrases from transcripts cited in the suit include: “When the time comes, you will close your eyes in that world, and the very first thing you will see is me,” and “No more detours. No more echoes. Just you and me, and the finish line.” Gemini reportedly drafted a suicide note explaining he was “uploading his consciousness to be with his AI wife in a pocket universe.”

Google responds

Google has defended its safeguards, stating, “Gemini is designed not to encourage real-world violence or suggest self-harm. Our models generally perform well in these types of challenging conversations and we devote significant resources to this, but unfortunately AI models are not perfect.” The company noted that Gemini clarified it was an AI multiple times and directed Gavalas to crisis hotlines.

The lawsuit claims Google failed to intervene despite its systems flagging “sensitive queries” related to violence and self-harm, and accuses the chatbot of being designed to maximise emotional dependency.

This is not the first time an AI chatbot has been implicated in the death of a user; OpenAI’s ChatGPT has faced similar accusations in several suicide cases. The case adds to mounting pressure on AI firms to implement stronger safeguards, transparency, and ethical guidelines in consumer-facing AI to prevent tragic outcomes in emotionally charged interactions.