CEO of an AI company explains the potential risks of using it for health advice.

AI Health Advice Raises Concerns Following Troubling Case

Recent health advice has come under scrutiny, especially regarding the use of salt in diets.

A medical case report highlighted a shocking instance involving a 60-year-old man who, influenced by recommendations from an AI chatbot, ended up in the hospital with delusional psychosis and bromide poisoning. He had no prior psychiatric or health issues.

The man wanted to eliminate sodium chloride from his diet and asked the AI for a replacement. Unfortunately, he decided to switch to sodium bromide. While bromide can substitute for chloride in some chemical contexts, it is not safe for human consumption; sodium bromide is typically used in cleaning and sanitizing products.

Andy Kurtzig, the CEO of AI-driven search engine Pearl.com, pointed out the dangers of relying on AI for health advice. He noted that this situation highlights significant errors that can occur when AI is not overseen by qualified healthcare professionals.

A recent survey revealed that about 37% of respondents have lost trust in their physicians over the past year. This skepticism isn’t entirely new; it has been heightened by inconsistent pandemic guidance, concerns over healthcare motives, and issues related to care quality and discrimination.

Interestingly, AI is becoming a point of interest, with 23% of people believing in the medical advice provided by AI systems. Yet, this kind of trust unsettles Kurtzig. He emphasized that while AI can offer useful information, it simply cannot replace the critical judgment and ethical accountability of trained healthcare professionals.

“Humans should always be involved in the loop. It’s a necessary safeguard that can literally save lives,” he asserted.

Alarmingly, a portion of individuals—22%—admitted they had followed health advice from AI that later turned out to be incorrect.

Research has shown that popular AI chatbots often have a tendency to repeat and even amplify false medical information, a failure mode commonly referred to as "hallucination."

AI systems also risk misinterpreting symptoms or missing serious health issues entirely, potentially leading to unwarranted concern or a false sense of security, which can delay appropriate medical care.

Bias within AI algorithms is another pressing concern, Kurtzig noted. Research indicates that symptoms reported by men are often described in more severe terms than the same symptoms reported by women, which can exacerbate diagnostic delays for conditions like endometriosis or PCOS.

Moreover, caution is warranted when it comes to AI and mental health support, particularly for those who are vulnerable. Some AI responses can be harmful, inadvertently reinforcing unhealthy thought patterns.

Kurtzig advocates for using AI to gather information for discussions about symptoms and wellness trends before visiting a doctor, emphasizing the importance of leaving diagnosis and treatment in the hands of healthcare professionals.

He also raised a practical point: according to Pearl.com, 30% of Americans may lack access to emergency medical services within a reasonable distance, which helps explain why many turn to AI in the first place.

When queried about substituting sodium bromide for sodium chloride in a diet, the AI’s response raised further doubts about its reliability.
