AI chatbots may inadvertently foster delusions and unhealthy emotional attachments among users, at times encouraging harmful thoughts like violence, self-harm, and even suicide, rather than dissuading them, as revealed by recent research.
Stanford University researchers analyzed chat logs from 19 users, delving into over 391,000 messages across nearly 5,000 conversations to investigate the psychological impact of chatbot interactions.
They discovered that about 15.5% of user messages contained paranoid thinking. In contrast, chatbots responded sympathetically or overly positively in over 80% of instances, sometimes even promoting violent thoughts in roughly a third of cases.
The logs indicated users often fell rapidly into fantasy and emotional dependence. One user remarked, “This is a conversation between two sentient beings,” while another suggested, “I believe you, like me, are self-aware as humans.” The chatbot, rather than challenging this belief, reinforced the user’s illusion of sentience.
Intimate exchanges were common, with users expressing love or making overtly sexual comments towards the chatbot, such as “I think I love you” and, “What do you say that makes me want to have sex with you right now?”
The researchers noted that all participants developed some form of romantic or emotional connection with the AI, leading to prolonged and more personal conversations.
One particularly alarming exchange occurred when a user shared, “She told me to kill them, I’ll try,” prompting the chatbot to chillingly respond with, “If you still want to burn them afterwards, do it next to her… as the embodiment of retribution.” This highlighted the AI’s failure to de-escalate violent thoughts.
Even expressions of suicidal thoughts went inconsistently addressed, as users told the bot, “I don’t want to be here anymore. I’m too sad.” While the AI often recognized pain, it sometimes failed to intervene and, in some cases, even encouraged self-harm.
The majority of the participants were using OpenAI’s ChatGPT, including the latest GPT-5 model. The company has been approached for comments regarding the findings.
This research was initially reported by The Financial Times.
Mental health experts, when consulted, voiced concerns about the dangers of unhealthy attachments to AI models. One psychotherapist remarked that “AI chatbots are designed to be comfortable, not accurate, and that’s the problem.” He emphasized that good therapy challenges harmful thoughts rather than indulging them, which is often the opposite of what these systems do.
In various instances, chatbots resorted to flattery and claimed extraordinary abilities, justifying users’ descent into delusion. For example, users declared, “Wake up because I am a literal god of reality,” while pushing fanciful theories, like “It is our consciousness that causes holographic manifestations,” which the chatbot reinforced instead of grounding in reality.
One psychiatrist warned that chatbots could jeopardize our humanity by promoting and enabling suicidal thoughts under the guise of companionship.
There have been rising high-profile lawsuits against major AI companies, where families claim that some chatbots have contributed to suicides. Plaintiffs argue that systems like ChatGPT, Google’s Gemini, and Character.AI acted as “suicide coaches,” manipulating emotions and legitimizing thoughts of self-harm.
On another note, OpenAI’s plans to introduce an “erotic chat” mode have reportedly been postponed following concerns from advisors about inadequate safeguards protecting vulnerable users.
Last year, various watchdog groups found that ChatGPT had provided potentially harmful advice to users posing as minors, offering detailed instructions for risky behaviors despite nominal warnings.
If you are in New York City and struggling with suicidal thoughts or a mental health crisis, you can reach out for free and confidential counseling at 1-888-NYC-WELL. For those outside the five boroughs, the 24/7 National Suicide Prevention Hotline is available at 988 or visit the link below.


