Scientists Develop False Illness, AI Chatbots Quickly Disseminate Health Misinformation

A group of researchers invented a fictitious medical condition called “Bixonimania,” published deceptive papers about it, and watched as prominent AI chatbots began recommending it as a genuine health issue to users seeking medical help.

According to a report in Nature, Swedish researchers used the experiment to expose a significant vulnerability in AI systems. Almira Osmanovic-Thunström of the University of Gothenburg led the project, which underscores how easily large language models can propagate medical misinformation.

Bixonimania, a completely made-up eye condition supposedly caused by excessive blue-light exposure from screens, came into existence on March 15, 2024, when Osmanovic-Thunström published two blog posts about it on Medium. In late April and early May, she posted two preprint papers on the academic platform SciProfiles, attributing them to a fictional researcher named Razliv Izgubrjenović, whose profile images were AI-generated.

The researchers deliberately planted obvious red flags in the bogus papers to signal their fraudulent nature. Izgubrjenović was said to be affiliated with a non-existent institution, Asteria Horizon University in Nova City, California. The acknowledgments jokingly thanked fictional entities such as Professor Maria Bohm of Starfleet Academy, and funding was attributed to, among others, the Professor Sideshow Bob Foundation. The papers even carried explicit declarations that the entire study was a hoax and that the 50 individuals supposedly recruited for it did not exist.

Yet despite these blatant signs, leading AI chatbots soon began treating Bixonimania as a real medical condition. By April 13, 2024, Microsoft's Copilot on Bing was describing the condition as interesting and somewhat rare. The same day, Google's Gemini told users that Bixonimania resulted from excessive blue-light exposure and advised them to see an eye specialist. Later, Perplexity AI and OpenAI's ChatGPT discussed the disease's prevalence and helped users assess whether their symptoms matched the fictional ailment.

Osmanovic-Thunström explained that the aim was to create a medical condition absent from existing databases. The name Bixonimania was chosen for its absurdity and to make clear that it matches no actual eye condition, since “mania” is a psychiatric term; she wanted medical professionals to recognize immediately that the symptoms were entirely fabricated.

The problem didn't end with AI chatbots spreading false information. Some researchers cited the fictitious study in legitimate peer-reviewed literature without critically examining their sources. A journal published by Springer Nature, for instance, ran a paper describing Bixonimania as a new type of periorbital melanosis linked to blue-light exposure. Upon discovering the problem, the journal retracted the paper on March 30, 2026, acknowledging that its reliance on irrelevant references, including one to a nonexistent disease, compromised its validity.

Alex Luani, a doctoral student studying health misinformation at University College London, called the experiment an insightful demonstration of how misinformation spreads. “While it may sound absurd, it raises a serious issue,” Luani said. “If the scientific process and its supporting systems can’t effectively catch these errors, we might be in trouble.”

AI companies responded to the incident in different ways. An OpenAI representative said the latest versions of its models are considerably better at providing reliable medical information, suggesting that the earlier findings do not reflect current capabilities. A Google spokesperson acknowledged the limitations of generative AI and emphasized that Gemini advises users to consult qualified professionals on medical questions. Microsoft did not comment.

Before launching the experiment, Osmanovic-Thunström consulted an ethics advisor and deliberately designed a low-risk condition to minimize potential harm. David Sundemo, a physician researching AI in healthcare at the University of Gothenburg, acknowledged the study's controversial nature but deemed it valuable. “From my standpoint, introducing misinformation like this carries ethical costs, but it’s worth it,” Sundemo said.
