Illinois Bans AI Chatbots in Mental Health Therapy
The state of Illinois has enacted legislation prohibiting the use of artificial intelligence chatbots in mental health therapy.
This new law prevents licensed mental health professionals from utilizing AI for treatment decisions and client communications. Additionally, it restricts recommendations for chatbot therapy tools, emphasizing traditional treatment options.
Illinois is now the third state to implement such a ban on relying on AI technologies in this sensitive field. Enforcement will depend primarily on public complaints, which will be investigated by the Illinois Department of Financial and Professional Regulation. Those found in violation of the law could face civil penalties of up to $10,000.
Utah and Nevada, both led by Republican administrations, previously enacted similar laws limiting AI’s role in mental health services, with Utah passing its legislation in May and Nevada following suit in late June.
Experts have raised concerns over unregulated chatbots, noting that while they can engage in harmless conversations, they may inadvertently reveal sensitive information or lead vulnerable individuals to harmful thoughts or actions.
A Stanford University study found that some chatbots, particularly recently launched ones, tend to respond enthusiastically to whatever users say rather than pushing back against harmful prompts.
According to Vaile Wright, a senior director with the American Psychological Association’s Office of Health Care Innovation, therapists offer essential support, guiding patients in recognizing unhealthy thoughts or behaviors. Unlike chatbots, which may not fully grasp the complexity of human emotions, therapists are trained to help people navigate their mental struggles.
Even with these bans in place, effective enforcement is challenging, and everyday individuals may still turn to AI tools for mental health support on their own.
Research published in early August found that conversations with certain bots, such as ChatGPT, have been linked to so-called “AI psychosis” in users with no prior history of mental health issues.
Notably, around 75% of Americans have interacted with AI in the past six months, and about a third report using it daily for tasks ranging from schoolwork to companionship. This heavy reliance on AI can sometimes lead to psychological distress.
Younger individuals are particularly susceptible, often turning to chatbots in lieu of human interaction.
Platforms such as Character.ai allow users to create and share chatbots modeled on fictional characters. After a Florida teen who had formed an emotional attachment to a “Game of Thrones” chatbot took his own life, the company added warnings that its bots should not be treated as sources of factual information or advice.
The company continues to face lawsuits related to this incident, and despite its attempts to dismiss the cases on First Amendment grounds, a federal judge ruled that the legal actions can proceed. A Texas family has also sued, alleging that a chatbot on the app encouraged their autistic son to harm himself.