
Would you be comfortable with AI making choices for your doctor while you are in surgery?

We’ve never encountered a technology that can appear so capable yet, when it really counts, fail so badly. This is the result of government and big-tech collaborations that monopolize the public space, hastily promoting AI for misguided purposes while understating its limitations. There are areas where “mostly right” is good enough, but the operating table, where lives are at stake, is certainly not one of them.

The FDA is reportedly inundated with malfunction reports about AI-assisted surgical devices even as it expedites approvals, and numerous patients who were injured have filed lawsuits. Product recalls, moreover, are happening at unprecedented rates.

One noteworthy example is Acclarent’s TruDi software, designed to assist otolaryngologists with image processing and real-time feedback. In its first three years on the market, prior to 2021, it generated seven malfunction complaints. Then machine-learning algorithms were integrated into it. Following that update, the FDA received 100 unverified failure reports, along with eight serious injury claims.

What kinds of injuries, you might wonder? In some cases, the software reportedly hallucinated during surgery, misleading surgeons about the location of their instruments. Though a direct causal relationship hasn’t been established, patients who underwent TruDi-guided surgeries have reported issues like:

  • Cerebrospinal fluid leaking from the nose.
  • A hole accidentally drilled into the base of the skull.
  • Strokes resulting from severed major arteries.

Anyone familiar with AI can appreciate how it might misidentify anatomical structures. One lawsuit even states that the device was likely safer before the introduction of AI algorithms. TruDi is among at least 1,357 AI-powered medical devices now authorized by the FDA, a sharp increase over the count from just two years ago. Yet only 25 scientists in the relevant department are responsible for vetting these devices.

It seems the promise of AI capabilities was exaggerated, as the high recall rates show. Researchers at Yale and Johns Hopkins recently found that 182 recalls were linked to 60 FDA-approved devices using AI, nearly half of them occurring within a year of approval, far surpassing the recall rate for devices under similar scrutiny overall.

Tellingly, many of the recalled products were made by publicly traded companies, which suggests pressure from investors to hasten development. A lawsuit filed in Dallas claims that a doctor using the TruDi system was misled and accidentally severed a carotid artery, causing a blood clot and a stroke. The plaintiff’s attorney noted that the doctor had no idea how close he was to the carotid artery. The consequences for the patient were severe: part of his skull was removed, and a year later he continues to struggle with daily activities.

This brings us to a broader concern about the haste infiltrating medical practice. A recent study suggested that AI chatbots, while they may perform well on standardized tests, yield mixed results when diagnosing real medical issues. Despite their promise, then, relying on them for help with serious symptoms could expose patients to real risk.

Once again, “mostly right” falls short in healthcare. Having mostly correct information can actually heighten the danger. The problem with AI is that it often behaves like the most knowledgeable entity in the room, confident yet unable to recognize or correct its own mistakes. Catastrophic outcomes arise when humans place their trust in supposed experts that break down in uncertain situations.

We must not prioritize speed over safety, particularly where the FDA and the introduction of AI technologies in healthcare are concerned. The amounts invested in these technologies are staggering, yet the returns have been underwhelming; that gap should not translate into rushed, hype-driven approvals.

AI spending now rivals historical investments in the railroads or the moon landing, which raises red flags given that the expenditure has yet to yield significant returns. Companies seem desperate to drive adoption, even turning to influencers to boost usage.

While there is hope for technological improvement, current iterations should not be entrusted with such stakes without major changes. Generative AI should supplement human thought, not replace it. Above all, safety must remain our guiding principle, with human judgment serving as the ultimate safeguard against risk.
