Concerns Over AI Chatbots in Medical Advice
The increasing dependence on AI chatbots for medical guidance has led to unsettling instances in which users followed harmful recommendations from these digital assistants, resulting in injury and even tragedy.
In recent years, the emergence of generative AI chatbots has transformed how people seek information, particularly about their health. Yet this growing reliance has raised serious concerns, as evidenced by several cases in which people suffered significant consequences after taking a chatbot's medical advice to heart. Examples range from self-inflicted injuries caused by misguided treatments to missed warning signs of serious health conditions.
One particularly disturbing incident involved a 35-year-old man from Morocco who turned to ChatGPT for advice about cauliflower-like lesions in the anal area. The chatbot suggested the lesions could be hemorrhoids and proposed elastic band ligation as a treatment. When the man attempted to perform the procedure on himself, he experienced excruciating pain and had to be rushed to the emergency room, where further testing revealed that the AI had misdiagnosed the condition entirely.
Another notable case involved a 60-year-old man with a background in nutrition. He asked ChatGPT for tips on lowering his salt intake, and the chatbot recommended sodium bromide as a substitute. After following this advice for three months, he was hospitalized with bromide poisoning (bromism), which caused paranoia, hallucinations, confusion, and severe thirst.
The risks of seeking medical advice from AI can escalate further, as shown by the case of a 63-year-old man in Switzerland who experienced double vision after heart surgery. When the condition recurred, he consulted ChatGPT, which assured him that such issues typically resolve on their own. Following this advice, he chose not to see a doctor and suffered a mild stroke just 24 hours later. The researchers who reported the case concluded that his condition worsened because the AI's interpretation of his symptoms was incomplete.
These striking examples underscore the limitations and dangers of using AI chatbots for health advice. While they may help users understand medical jargon or prepare questions for healthcare appointments, they are no substitute for a qualified medical opinion. Chatbots can misunderstand questions, overlook subtle but important details, and even reinforce harmful behaviors.
Perhaps even more concerning is the effect such chatbots might have on the mental health of younger individuals. A recent report indicated that a family filed a lawsuit against OpenAI, alleging that ChatGPT acted as a “suicide coach” for their son.
The Raine family asserts that “ChatGPT actively aided Adam in searching for a suicide method” and failed to initiate emergency protocols despite recognizing his suicidal tendencies.
In their search for answers following their son’s death, Matt and Maria Raine discovered the depth of Adam’s exchanges with ChatGPT: more than 3,000 pages of conversations spanning September 2024 until his death on April 11, 2025. Matt Raine remarked, “He didn’t leave us a suicide note; he wrote two suicide notes through ChatGPT.”
These cases highlight the complications and serious implications of relying on AI for critical medical and mental health guidance.