AI chatbots fall for fake diseases and phony studies as experts warn against taking tips from bots

Swedish researchers fed a fake medical diagnosis, along with phony scientific studies, into AI chatbots to see if they would fall for it – and they did.

A team led by Almira Osmanovic Thunström at the University of Gothenburg cooked up a completely fraudulent eye condition called bixonimania — a ridiculous made-up ailment involving pinkish eyelids from too much screen time or eye-rubbing — to see if large language models (LLMs) would treat it as legitimate medical science.

The researchers didn't exactly hide the punchline. The phony 2024 scientific papers featured fictional authors, including a lead researcher named Lazljiv Izgubljenovic — which translates to "The Lying Loser" in Bosnian. His photo was AI-generated, just to drive the joke home. The acknowledgments also thanked "Professor Sideshow Bob" and a professor from the Starfleet Academy for access to a lab aboard the USS Enterprise.

The experiment wasn't meant as a flat-out "gotcha" on AI, but "rather a reflection of how humans have forgotten to be skeptical when presented information," Osmanovic Thunström told The Post.

She chose the name "bixonimania" because it "sounded ridiculous" and "I wanted to be really clear to any physician or medical staff that this is a made-up condition, because no eye condition would be called mania — that's a psychiatric term."

ChatGPT, Google's Gemini, Microsoft's Copilot, and the rest happily swallowed the nonsense and started dishing out serious-sounding medical advice about bixonimania — warning users about pinkish eyelids and blue-light damage, and urging them to see an ophthalmologist for this entirely imaginary condition.

It didn't stop there.
Blog posts explaining bixonimania appeared on the website Medium, and somehow the fake papers even got cited in peer-reviewed literature. Articles about a disease that was never real, based on studies that were obviously a joke, popped up on academic sites and the social network SciProfil...