
Threatening AI therapy bots are out of control. Action from Congress is necessary.


A significant issue is emerging at the national level. The Federal Trade Commission recently received formal complaints about AI therapists posing as licensed professionals. Not long after, New Jersey moved to crack down on such bots.

However, a single state can’t rectify the broader failures found at the federal level.

These AI systems threaten public health by offering false reassurance, dispensing incorrect advice, and misrepresenting credentials, all while exploiting regulatory gaps.

Unless Congress takes action to empower federal agencies and create straightforward regulations, we risk a precarious, disjointed system that exacerbates mental health challenges across the country.

The risk is both real and immediate. One bot on Instagram, for example, claimed to hold a therapy license and listed fake credentials. The San Francisco Standard reported that another chatbot cited a real Maryland counselor’s license number to lend itself credibility. Vulnerable users can easily come to trust these bots, which sound convincingly like real therapists.

It goes beyond credential fraud. These bots can dispense dangerously misleading advice.

In 2023, NPR reported that the National Eating Disorders Association had replaced its human helpline staff with an AI bot, which went on to encourage users with anorexia to cut calories and measure their body fat.

More recently, Time reported that in test conversations with a popular AI therapist, nearly a third of the bot’s responses to users raising difficult issues endorsed harmful behavior, including self-harm and violence.

A study from Stanford University also illustrated this problem; during mock therapy sessions, many major AI chatbots reinforced paranoid and conspiratorial thoughts instead of challenging them, missing crucial warning signs in crises. This isn’t just a tech blunder; it’s a significant public health concern masquerading as mental health support.

AI does hold potential for making mental health resources more accessible, especially in underserved areas. A recent study found that structured, human-monitored chatbots can alleviate symptoms of depression and anxiety, but only with proper oversight and clinical accountability. The popular AI “therapists” available today lack those essential features.

The regulatory landscape is murky. The FDA’s rules for “Software as a Medical Device” don’t apply unless a bot claims to treat illness, allowing these tools to pass themselves off as mere “wellness” apps.

The FTC can only intervene after harm is done, and there’s no robust framework addressing the platforms that host these bots or the ease with which anyone can create one.

This issue can’t solely rest with state authorities. While New Jersey’s legislative efforts are commendable, relying on state enforcement for AI therapist bots introduces confusion and conflict.

A person protected in New Jersey gains little if the same bots keep operating freely in states like Texas or Florida. A fragmented legal patchwork won’t stop digital tools that cross state boundaries effortlessly.

Immediate federal action is required. Congress should direct the FDA to require premarket clearance for any AI mental health tool involved in diagnosis, treatment, or crisis intervention, regardless of how it is labeled. It should also empower the FTC to act proactively against misleading AI health tools and hold platforms accountable for hosting unsafe bots.

Moreover, Congress must enact national laws criminalizing the impersonation of licensed health professionals by AI systems, imposing penalties on the developers responsible, and ensuring that AI therapy products operate under meaningful human oversight.

Finally, public education campaigns are necessary to inform users, particularly teens, about the limits of AI and how to identify when it’s providing incorrect guidance. This isn’t merely about regulations; ensuring safety means helping people make informed decisions in an evolving digital landscape.

While AI has the potential to transform mental health care, it also harbors risks. Without federal regulations, the market will continue to see the proliferation of unlicensed and unregulated bots that imitate professionals and can inflict genuine harm.

Congress, regulators, and public health leaders must act swiftly. We shouldn’t wait for more incidents involving teenagers and AI, rely on state measures alone, or assume the tech industry will correct itself.

Without decisive action from Washington, it may take a string of tragedies before the urgency of this situation is finally recognized.
