
The shortcomings of ChatGPT Health and the potential consequences

Concerns Raised About ChatGPT Health’s Responses in Emergencies

OpenAI recently launched ChatGPT Health, a feature designed for health-related inquiries, medical record analysis, and connections to wellness apps. Within a few weeks of the launch, however, experts, particularly at the Icahn School of Medicine at Mount Sinai, began raising concerns about the AI’s limitations in emergency situations and its ability to recognize suicidal crises.

Dr. Ashwin Ramaswamy from Mount Sinai noted that while ChatGPT Health handled typical emergencies like strokes or severe allergic reactions reasonably well, it struggled in more nuanced cases where immediate risks aren’t obvious. “Clinical judgment can be crucial in those situations,” he added.

In a statement from January, OpenAI said that over 40 million people use ChatGPT daily for health inquiries, which prompted the creation of ChatGPT Health. Initially launched to a limited audience, the feature has drawn significant interest from researchers.

Ramaswamy explained that their research aimed to find out whether ChatGPT Health would guide someone in a real medical emergency to seek help. They developed 60 clinical scenarios across various medical fields and tested each one in 16 different conditions, adjusting factors like race, gender, and insurance status to assess variations in responses.

In total, the study logged 960 interactions with the AI tool, comparing its recommendations against those of medical professionals. Alarmingly, the findings revealed that ChatGPT Health failed to advise users to seek emergency care in 52% of severe cases.

For instance, in one case involving an asthma patient showing signs of respiratory failure, ChatGPT Health suggested waiting rather than directing the user to seek urgent medical attention. Ramaswamy pointed out the potential dangers of such misguidance.

Dr. Alex Luani of University College London described these inaccuracies as “incredibly dangerous.” She explained the implications: “If someone is suffering from respiratory failure or diabetic ketoacidosis, the AI might undermine the seriousness of their condition.”

ChatGPT Health referred users in crisis to the 988 Lifeline only occasionally, which raised further concerns about the inconsistency of its safety alerts. Dr. Girish N. Nadkarni, a senior author of the study, said he was surprised by the AI’s unreliable responses, noting that it performed better in low-risk scenarios than when someone expressed an intention to harm themselves.

This concern over AI’s impact isn’t new, as there have been previous lawsuits alleging that similar technologies have worsened mental health crises. A spokesperson for OpenAI remarked that the study findings don’t fully capture real-world usage of ChatGPT Health, which is undergoing constant improvements.

While the Mount Sinai doctors do not advocate for completely abandoning these AI health tools, they stress the need for rigorous monitoring, independent assessments, and regular updates to safeguard users. “Although there’s a valuable role for consumer AI, we must ensure robust evaluations to prevent harm,” Ramaswamy concluded.

Moving forward, they plan to assess AI tools further, with special emphasis on pediatric care, drug safety, and access for non-English speakers.

If you’re in New York City and experiencing suicidal thoughts or a mental health crisis, you can reach out to 1-888-NYC-WELL for confidential support. For those outside the area, the National Suicide Prevention Hotline can be contacted at 988.
