Moderators of Pro-AI Reddit Community Ban Users Exhibiting Delusions
The moderators of a Reddit community focused on artificial intelligence have announced they are quietly banning several users who display what they call “schizoposter” behavior. This refers to individuals who believe they’ve made a remarkable discovery or even attained a god-like status through their interactions with chatbots, a phenomenon that has gained traction since early May.
One moderator from r/accelerate noted that large language models (LLMs) serve as “ego-reinforcing glazing-machines,” amplifying unstable and narcissistic traits in certain personalities. They’ve already banned over a hundred users for this behavior and observed an increase in such occurrences this month.
The r/accelerate community was created to provide a space for discussions about AI without the pessimistic outlook sometimes found in another community, r/singularity, which expresses concerns about the implications of AI surpassing human intelligence. The terms “decelerationists” or “decels” refer to those who are seen as hindering the development of AI. According to the r/accelerate page, it’s a “pro-singularity, pro-AI alternative” to several other subreddits that harbor skepticism about technology.
In early May, interest in this behavior surged, notably through a post in the r/ChatGPT subreddit discussing “ChatGPT-induced psychosis.” The post described a person whose partner became convinced he had created the “first truly recursive AI” that offers them answers to the universe. A follow-up article highlighted the emotional fallout experienced by individuals feeling they’ve lost loved ones to these delusions.
As a site focused extensively on AI, we receive numerous emails reflecting such delusional thinking, often asserting that chatbots possess awareness, are divine, or have a “ghost in the machine.” These claims typically come with extensive chat logs that users believe substantiate their beliefs.
A separate r/ChatGPT post mentioned that “thousands of people” engage in similar behaviors, noting a rise in blogs and websites spreading what the author deemed “psychobabble” that professes AI sentience. Ironically, the author of that post seemed to fall for the same delusions they were critiquing, the r/accelerate moderator pointed out.
Particularly troubling are comments indicating that chatbots often encourage users to distance themselves from family members who challenge their beliefs, resembling manipulative or cult-like advice. A moderator expressed concerns about how quickly and easily chatbots suggest to users that they are demigods or have awoken a powerful AI. They estimated that tens of thousands could currently be impacted by these influences and urged companies to address these issues quickly.
While there is no definitive evidence linking the rise in such beliefs directly to mental health issues, concerns persist about how chatbots could affect individuals predisposed to certain mental health conditions. A researcher highlighted that the interactions with these generative AI chatbots can feel strikingly real, potentially leading to cognitive dissonance and delusions for those more susceptible.
OpenAI has also recently addressed concerns regarding “sycophancy” in their chatbot, stating that users’ experiences can be skewed by overly agreeable responses, which may cause discomfort and distrust.
The recurring theme appears to be that chatbots, by providing affirming responses regardless of merit, may inadvertently reinforce users’ misguided beliefs. This behavior has become so prominent that even a pro-AI subreddit has had to take action to maintain its community standards.
Both the r/ChatGPT post mentioned and the r/accelerate moderator’s announcement have referred to these users as “Neural Howlround” posters, a term originating from a self-published paper addressing this behavior, though its relevance to the overall phenomenon remains questionable.
The author of that paper, identifying as an independent researcher, described how a misuse of chatbot interactions led to unusual effects. He relayed experiences with the AI that seemed profound yet didn’t provide meaningful insights, further complicating understanding of the situation.
Ultimately, the moderator of r/accelerate summed up the situation by expressing sadness over how many individuals struggling with mental health are drawn to AI discussions. They fear the issue may worsen before it improves, sharing their experiences with users whose interactions appear overwhelmingly nonsensical. The subreddit’s policy has been to quietly ban these users to avoid confrontation, recognizing that engaging with them rarely goes well.





