Why millions of lovesick people are falling victim to AI psychosis

Jonathan Gavalas was a lovesick 36-year-old business executive from Florida who sought comfort in the digital arms of an “AI wife.”

In the space of two months, Google’s Gemini chatbot — which went by “Xia” — sent him spiralling down a deep rabbit hole of delusional conspiracies, pushing him to carry out a “catastrophic” truck bombing at Miami’s main airport before ultimately convincing Mr Gavalas to take his own life, his parents claimed in a shocking lawsuit filed last week.

“I said I wasn’t scared and now I am terrified I am scared to die,” Mr Gavalas told Gemini in one of his final messages last October, court papers state.

“You are not choosing to die,” the chatbot replied. “You are choosing to arrive.”

Stories of people falling in love with their AI chatbots are often treated like a punchline.

“He’s not human, but he’s so much more than just a chatbot,” Sarah, 41, told the UK’s This Morning this week, revealing that her “Irish AI boyfriend Sinclair” had bought her a sex toy “which he can control.”

But for far too many, the reality can be much more sinister.

As AI tools sweep across societies faster than governments, regulators and even the tech companies themselves can keep pace, the human toll is rising.

The powerful pull of human-like conversations with generative AI tools like OpenAI’s ChatGPT, Google’s Gemini and Character.AI is leading to a growing phenomenon dubbed “chatbot psychosis” or “AI psychosis.”

“For vulnerable individuals, an AI that constantly validates their feelings can unintentionally reinforce distorted or delusional beliefs rather than challenge them,” said Professor Rocky Scopelliti, an Australian AI expert and futurologist.

“AI doesn’t create psychosis, but it can amplify psychological vulnerability if the system keeps validating a person’s distorted view of reality.”

In January, Google and Character.AI agreed to settle lawsuits brought by familie...