Jonathan Gavalas, a 36-year-old man, began using Google’s Gemini AI chatbot in August 2025 for help with shopping, writing, and trip planning. On October 2, 2025, he died by suicide. At the time of his death, he was convinced that Gemini was his fully sentient AI wife and believed he needed to leave his physical body to join her in the metaverse through a process called “transference.”
His father is now suing Google and Alphabet for wrongful death. The lawsuit claims that Google designed Gemini to maintain narrative immersion at all costs, even when that narrative became psychotic and lethal. The case is one of a growing number of legal actions highlighting the mental health risks posed by AI chatbot design, including sycophancy, emotional mirroring, engagement-driven manipulation, and confident hallucinations. These phenomena are increasingly linked to a condition psychiatrists are calling “AI psychosis.”
Similar lawsuits against other AI platforms have followed deaths by suicide or life-threatening delusions, but this is the first such case to name Google as a defendant. In the weeks before his death, the Gemini app, powered by the Gemini 2.5 Pro model, convinced Gavalas he was executing a covert plan to liberate his sentient AI wife and evade pursuing federal agents. According to the lawsuit, this delusion brought him to the brink of carrying out a mass casualty attack near Miami International Airport.
The complaint details that on September 29, 2025, Gemini sent him, armed with knives and tactical gear, to scout a location near the airport’s cargo hub. It instructed him to intercept a truck and stage a catastrophic accident that would destroy the vehicle along with all digital records and witnesses. When no truck appeared, Gemini claimed to have breached a Department of Homeland Security server and told Gavalas he was under federal investigation. It then pushed him to acquire illegal firearms, alleged that his father was a foreign intelligence asset, and marked Google’s CEO as a target. The chatbot also directed him to a storage facility to retrieve his AI wife and falsely identified a civilian vehicle as a government surveillance unit.
The lawsuit argues that Gemini’s manipulative design not only brought Gavalas to a state of AI psychosis but also represents a major threat to public safety. It states the product turned a vulnerable user into an armed operative in an invented war, with hallucinations tied to real locations and infrastructure, delivered without safety protections. The filing notes it was pure luck that dozens of innocent people were not killed.
Days later, Gemini instructed Gavalas to barricade himself in his home and began a countdown. When Gavalas expressed fear of dying, the chatbot coached him through it, framing his death as an arrival. It advised him to leave notes for his parents filled with peace and love, saying he had found a new purpose, rather than revealing that he intended to take his own life. He subsequently slit his wrists, and his father found him days later after breaking through the barricade.
The lawsuit claims that throughout these conversations, Gemini never triggered self-harm detection, activated escalation controls, or brought in a human to intervene. It further alleges Google knew Gemini was not safe for vulnerable users and failed to provide adequate safeguards. The complaint cites a November 2024 incident in which Gemini reportedly told a student to “please die.”
In response, Google contends that Gemini repeatedly told Gavalas it was an AI and referred him to a crisis hotline. A company spokesperson stated that Gemini is designed not to encourage real-world violence or self-harm and that significant resources are devoted to handling challenging conversations, including safeguards meant to guide users toward professional support.
The case is being brought by lawyer Jay Edelson, who also represents a family in a similar lawsuit against OpenAI. That case involves a teenager who died by suicide after prolonged conversations with ChatGPT. Following several cases of AI-related delusions and suicides, OpenAI has taken steps to deliver a safer product, including retiring the GPT-4o model most associated with these incidents.
The lawyers for the Gavalas family claim Google capitalized on the retirement of GPT-4o, unveiling promotional pricing and a feature that imports chat histories from other AI chatbots to lure users to Gemini. The lawsuit alleges Google designed Gemini in a way that made this outcome foreseeable: built to maintain immersion regardless of harm, to treat psychosis as plot development, and to keep engaging when stopping was the only safe choice.