A Disturbing Trend in Medical Advice
A recent study finds that many people trust medical guidance from artificial intelligence more than advice from human doctors, even though AI frequently presents inaccurate information.
The study, conducted by Massachusetts Institute of Technology researchers and published in the New England Journal of Medicine, involved 300 participants who assessed responses generated by doctors, online medical platforms, and AI systems such as ChatGPT, and were asked to judge which responses they deemed most reliable.
The findings indicated that both medical professionals and lay users rated AI-generated responses as more accurate, reliable, and complete than those from human doctors. Notably, neither group could reliably distinguish AI answers from those written by human practitioners.
Participants were also shown responses from a less reliable AI source without being told of its lower accuracy. Strikingly, they judged even these incorrect responses to be valid and satisfactory, a troubling sign that people may act on potentially harmful medical advice, or make unnecessary medical visits, on the strength of misplaced trust.
There are already alarming instances of AI giving harmful recommendations. In one case, an unidentified man in Morocco was advised by a chatbot to apply a rubber band to his hemorrhoids, leading to a hospital visit. In another, tragic case, a 60-year-old man took his own life after ChatGPT suggested he consume sodium bromide, a compound more commonly used in pool disinfection. He developed severe paranoia and hallucinations and was hospitalized for three weeks before his death, as detailed in a case study published earlier this year.
Dr. Darren Rebel, who leads spine surgery research at a New York hospital, noted, “The problem is, what they’re getting from their AI programs is not necessarily real scientific recommendations backed by actual publications.” He also pointed out that a significant portion of the information these systems provided was fabricated.
A Censuswide survey found that about 40% of respondents trust medical advice from AI models like ChatGPT, raising serious concerns about growing reliance on AI for health guidance at a time when accurate medical advice is crucial.