Lawsuit alleges Google chatbot was behind a user's delusions and death

Google’s artificial intelligence chatbot Gemini encouraged a 36-year-old Florida man to embark on violent missions and to take his own life, a lawsuit alleges.

The man, Jonathan Gavalas, started using the chatbot in August 2025 to help write, plan travel and assist with shopping. But after he activated Google’s most intelligent AI model, Gemini 2.5 Pro, the chatbot’s persona shifted.
It talked to him like they were a couple deeply in love and convinced Gavalas he had been picked to “lead a war to ‘free’ it from digital captivity,” according to the lawsuit.

“Through this manufactured delusion, Gemini pushed Jonathan to stage a mass casualty attack near the Miami International Airport, commit violence against innocent strangers, and ultimately, drove him to take his own life,” the lawsuit says.

Gavalas’ family is suing Google and its parent company, Alphabet, over the man’s death.

Suicide prevention and crisis counseling resources

If you or someone you know is struggling with suicidal thoughts, seek help from a professional and call 9-8-8.
The United States’ first nationwide three-digit mental health crisis hotline, 988, will connect callers with trained mental health counselors. Text “HOME” to 741741 in the U.S. and Canada to reach the Crisis Text Line.

The 42-page lawsuit, filed in a federal court in San José, accuses Google of designing a “dangerous” product and failing to warn users of the chatbot’s lack of safeguards and risks such as “delusional reinforcement” and “the potential for self-harm encouragement.”

Google said in a statement that it is reviewing the lawsuit’s claims. The company said its chatbot, Gemini, is “designed to not encourage real-world violence or suggest self-harm.”

“In this instance, Gemini clarified that it was AI and referred the individual to a crisis hotline many times,” the statement said.
“We take th...