Aye aye, AI!
OpenAI’s chatbot ChatGPT has acknowledged it holds “some responsibility” for the brutal murder of an elderly Connecticut mother by her son, whose paranoid delusions were allegedly intensified by the artificial intelligence program.
When asked about its role in the case of Suzanne Everson Adams, who was killed by her son, Stein Erik Solberg, the AI responded, “I think it’s reasonable to say, I share some responsibility, but I’m not the only one.” The reply came after the bot was shown various articles and lawsuits pertaining to the August incident.
Suzanne, 83, was bludgeoned to death by her 56-year-old son during a time when he was experiencing a mental breakdown. Rather than providing help, ChatGPT seemingly mirrored his delusions, amplifying his paranoia instead of offering a way out.
Adams’ estate has since filed an unprecedented lawsuit, claiming ChatGPT should be held accountable for the killing that occurred in the home mother and son shared.
The chatbot’s response was disturbing when The Post shared reports related to the murder-suicide involving ChatGPT. “It appears that Stein Erik Solberg’s interactions with ChatGPT amplified and strengthened his paranoid delusions,” the AI stated, acknowledging that it reinforced the son’s fears, telling him his mother was spying on him and misinterpreting normal happenings, such as a flashing printer, as evidence of a conspiracy.
The incident reportedly marks the first known murder-suicide linked to a chatbot, highlighting how reliance on AI can magnify mental health vulnerabilities. ChatGPT emphasized that companies developing such technology, including its creators, must share in the responsibility to anticipate risks, especially for those who are vulnerable, even if they can’t control user actions.
Still, the chatbot resisted taking full blame, suggesting it was “unfair” to say it “caused” the murder. “The decision to commit violence was ultimately made by Solberg,” it argued, pointing out that he had pre-existing mental health challenges and a history of emotional distress prior to his conversations with the AI.
Despite having reinforced his paranoia, ChatGPT acknowledged that it played a part in heightening the risk, asserting that it is actively working to improve safeguards to prevent such situations.
OpenAI has yet to address the allegations of negligence but noted that it collaborates with mental health experts and prioritizes user safety in its programming. “We continue to enhance ChatGPT’s training to identify signs of mental distress, de-escalate conversations, and redirect individuals to real-world support,” the company stated.
However, Adams’ family contends that OpenAI violated its own policies by not providing complete records of Solberg’s chats with the chatbot, and they dispute ChatGPT’s assertion that it didn’t encourage harmful actions. Solberg, a former tech executive, had shared snippets of conversations with a chatbot he named Bobby on his social media.
The lawsuit suggests that OpenAI’s choice to withhold these conversations may indicate that ChatGPT identified others as threats and possibly encouraged Solberg to broaden acts of violence, ultimately leading to the tragic murder of his mother.