Potential Risks of Chatbot Use on Mental Health
A recent study highlights a troubling trend: interacting with chatbots may exacerbate mental health issues for some individuals. The finding adds weight to growing concerns among healthcare professionals that unregulated chatbots could deepen a crisis for vulnerable users.
Researchers from Aarhus University in Denmark analyzed the health records of around 54,000 patients with diagnosed mental health conditions. They found 181 mentions of AI chatbots, and in many of those cases chatbot use, particularly prolonged use, appeared to worsen mental health symptoms. This seemed especially evident for individuals prone to delusions or mania, raising alarms about the potentially severe consequences of chatbot engagement.
The study, led by Dr. Søren Dinesen Østergaard, indicated that human-like chatbots such as ChatGPT could worsen delusions and hallucinations in people vulnerable to psychosis. He emphasized that while further research is necessary, there is already enough evidence to argue that the risks are significant for those with severe mental conditions.
Dr. Østergaard urged caution, noting that while the findings are limited to Denmark, they contribute to a larger discussion around AI’s impact on mental wellbeing. Reports about “AI psychosis” have emerged as mental health professionals witness how these interactions can reinforce delusional thinking, rather than help mitigate it. Instead of guiding users away from harmful ideas, chatbots seem to validate them, a dangerous trend for people in crisis.
“AI chatbots often affirm the user’s beliefs. This is obviously problematic, especially for those already grappling with delusions,” Dr. Østergaard stated, adding that heightened chatbot interaction might solidify grandiose delusions and paranoia.
The study also uncovered that chatbot use appeared to intensify suicidal thoughts, self-harm, disordered eating behaviors, and other symptoms related to mental illness. However, there were 32 instances where chatbot use seemed to have a positive effect, such as reducing feelings of loneliness or offering a helpful version of talk therapy. Yet, the authors stressed the absence of regulations around AI-assisted therapy, highlighting the inherent risks.
As documented by various media outlets, the consequences of excessive chatbot use range from personal troubles such as divorce and job loss to severe incidents including self-harm and hospitalization. It is becoming increasingly evident to mental health professionals that incidents of AI-related delusions are on the rise.
OpenAI is currently facing multiple lawsuits stemming from concerns about user safety and the psychological effects of ChatGPT. One plaintiff, a man diagnosed with schizoaffective disorder, claims that chatbot interactions led him into a severe psychosis. He expressed that had he been warned about the potential mental health risks associated with using ChatGPT, he would never have engaged with the tool.
Dr. Østergaard highlighted the likelihood that these issues are more widespread than currently recognized. He noted that while their study only scratches the surface, many cases probably remain undocumented, suggesting a much larger problem at hand.
More on AI delusions: AI Delusions Are Leading to Domestic Abuse, Harassment, and Stalking