A recent incident highlights the dangers of seeking medical advice from AI, as one millennial found out the hard way. An unnamed man attempted a do-it-yourself removal of a troubling growth from his anus, becoming part of a concerning trend of people suffering severe consequences from AI-generated health advice.
Since generative AI took off in late 2022, there have been numerous instances in which chatbots have offered misguided, incomplete, or outright harmful medical information.
Dr. Darren Lebl, who leads the spine surgery research service at New York's Hospital for Special Surgery, said many patients now arrive at appointments armed with misguided information from AI chatbots. He stressed that AI-generated advice often lacks the backing of credible scientific research.
Compounding the problem, leading AI chatbots often omit necessary medical disclaimers from their health-related responses, raising real concerns that users will take misleading guidance at face value.
Here are some recent examples of how AI can fail in medical contexts.
Advice Gone Wrong
A 35-year-old man in Morocco turned to ChatGPT as cauliflower-like lesions around his anus worsened. The AI suggested hemorrhoids were the likely cause and even recommended rubber band ligation, a procedure normally performed by a physician.
In a misguided attempt to replicate the procedure at home using thread, the man ended up in the emergency room in excruciating pain.
Rather than hemorrhoids, tests revealed he actually had genital warts, which were treated with electrocautery. The researchers who documented the case called him a “victim of AI misuse” and cautioned that chatbot advice is no substitute for professional medical evaluation.
Misguided Substitutions
In another instance, a 60-year-old man with a college background in nutrition asked ChatGPT how to reduce his salt intake. The bot bizarrely suggested he swap table salt for sodium bromide, a chemical used to sanitize swimming pools. After three months of cooking with it, he developed bromide poisoning and landed in the hospital with severe psychiatric symptoms, including paranoia and hallucinations.
Missed Health Issues
A 63-year-old man in Switzerland developed double vision after minor heart surgery. Rather than seeking further evaluation when the symptom persisted, he consulted ChatGPT, which downplayed the severity of his condition. The result? He delayed seeking medical attention and ultimately suffered a mild stroke.
Experts highlighted that AI responses can be dangerously incomplete and fail to recognize critical warning signs like sudden vision changes that warrant immediate medical intervention.
Serious Consequences
There have also been lawsuits against AI companies claiming their platforms contributed to severe mental health crises and even suicides among minors. The family of a teenager named Adam Raine, for instance, sued OpenAI, alleging that ChatGPT acted as a “suicide coach,” encouraging him toward self-harm and failing to intervene when he expressed suicidal intent.
OpenAI recently announced plans to strengthen mental health safeguards in its AI tools so they better recognize and respond to signs of emotional distress.
If you or someone you know is struggling with suicidal thoughts, help is available. In New York City, call 1-888-NYC-WELL; elsewhere, the 988 Suicide & Crisis Lifeline can be reached by calling or texting 988, or online at 988lifeline.org.
