Man hospitalized with toxic poisoning after following ChatGPT’s dietary advice

A 60-year-old man, trying to eliminate table salt from his diet for health reasons, turned to a large language model for suggestions on replacements. According to a case study published in the Annals of Internal Medicine: Clinical Cases, he sought advice from ChatGPT but ended up making a dangerous choice.

In an odd twist, ChatGPT suggested he swap sodium chloride (table salt) for sodium bromide, and the man followed the recommendation for three months. Sodium bromide, however, is toxic to humans; today it is used mainly in cleaning products and agriculture.

The case highlights concerns experts have raised about relying on AI for long-term health decisions. Sodium bromide isn’t just a poor substitute for table salt; it’s actively harmful.

Upon arriving at the hospital, the man reported various troubling symptoms: fatigue, insomnia, coordination issues, facial acne, skin bumps, and excessive thirst. He also expressed paranoia, believing his neighbor was attempting to poison him.

His condition escalated to auditory and visual hallucinations, and after he attempted to flee the hospital he was placed in a psychiatric unit. He required intravenous fluids, electrolytes, and antipsychotic medication before being discharged three weeks later.

The researchers noted that the case starkly illustrates how artificial intelligence can inadvertently contribute to severe health consequences. They emphasized the importance of applying common sense when interacting with AI, warning that relying solely on its suggestions can have dangerous outcomes.

While it’s difficult to know the exact details of the man’s conversations with ChatGPT, the researchers expressed skepticism that any human doctor would ever suggest sodium bromide as a salt substitute.

Dr. Jacob Glanville, CEO of a biotechnology company, strongly advised against using ChatGPT as a replacement for medical professionals. He pointed to data bias in language models, which can produce misleading advice drawn from outdated or inappropriate references.

Experts assert that current AI systems cannot reliably validate health information and should not be relied upon for medical guidance. They call for stronger safeguards in LLMs, such as integrated medical databases and human oversight, to prevent similar incidents in the future.

OpenAI, the creator of ChatGPT, has clarified that its tool is not designed for diagnosing or treating health conditions, encouraging users to consult healthcare professionals instead.
