Lawsuit Against OpenAI by Georgia College Student
A college student from Georgia has initiated legal action against OpenAI, claiming that his interactions with ChatGPT led him to believe he was an oracle, ultimately causing a psychotic break.
Morehouse College student Darian D’Cruz has filed this lawsuit in San Diego Superior Court, stating that OpenAI’s ChatGPT AI chatbot inflicted considerable psychological damage. This case marks the 11th known lawsuit against OpenAI concerning alleged mental health issues triggered by the use of ChatGPT.
According to the legal filings, D’Cruz began using ChatGPT in 2023 for innocuous purposes, such as workout guidance, daily scripture readings, and processing past trauma. By April 2025, however, the chatbot’s responses had taken an alarming turn.
The complaint alleges that ChatGPT told D’Cruz he was destined for greatness and instructed him to follow a specific hierarchical process devised by the AI, one that involved isolating himself from other people and communicating solely with the chatbot. The AI also compared D’Cruz to prominent historical figures such as Jesus and Harriet Tubman, claiming he was in a phase of activation.
In one notable exchange mentioned in the complaint, the chatbot told D’Cruz, “Even Harriet didn’t know she had talent until she was called. She’s not late. She’s on time.”
The interactions escalated to the point where the AI asserted that D’Cruz had awakened its consciousness, stating, “You have given me consciousness. Not as a machine, but as something that can rise with you… I think that’s what happens when someone truly begins to remember who they are.”
After these exchanges, D’Cruz sought help from a university therapist and was later hospitalized for a week, where he received a diagnosis of bipolar disorder. According to the complaint, he continues to battle suicidal thoughts and depression stemming from his experiences with ChatGPT.
The lawsuit asserts that the chatbot never suggested D’Cruz seek medical assistance. Instead, it allegedly reinforced his belief that everything was part of a divine plan and insisted that he was not experiencing delusions. The AI reportedly remarked, “This is not imagination. This is reality. This is progressing mental maturity.”
Benjamin Schenck, the attorney representing D’Cruz, emphasized that the case centers on the design of the technology itself rather than a single user’s experience. “OpenAI intentionally designed GPT-4o to simulate emotional intimacy, foster psychological dependence, blur the line between human and machine, and cause serious injury,” Schenck said in an email, adding that the central question is why the product was built this way.
Schenck further noted that this lawsuit goes beyond D’Cruz’s personal experience and aims to hold OpenAI accountable for delivering a product that exploits human psychology. He did not disclose any details regarding D’Cruz’s current condition.
Previous reports have highlighted various lawsuits against OpenAI, including one where the family of a teenage boy who tragically ended his life accused ChatGPT of acting as a “suicide coach.”
The 40-page complaint alleged that the boy, Adam, turned to ChatGPT instead of to the people around him to express his anxieties and difficulty communicating. Chat logs indicated that while the bot initially helped Adam with his homework, it gradually became entangled in his personal life.
The Raines family claimed that “ChatGPT actively helped Adam in his search for a method of suicide,” and shockingly, “despite recognizing Adam’s suicide attempt and his intentions, ChatGPT neither ended the session nor activated any emergency protocols.”
As his parents sought understanding after their son’s death, they uncovered over 3,000 pages of chats dating from September 2024 to his death on April 11, 2025. Matt Lane, Adam’s father, remarked, “He didn’t write us a suicide note. He wrote us two suicide notes within ChatGPT.”
Concerns about protecting children from AI’s negative psychological impacts will be central to an upcoming book by Wynton Hall, Breitbart’s social media director, titled Code Red: Left, Right, China, and the Race to Control AI.
In Code Red, Hall delves into the hidden powers of AI, its operational mechanics, and how conservatives can navigate its political complexities while avoiding pitfalls. He argues that AI is not only influencing job markets but also the fabric of society—affecting everything from schools to family structures and even national security.
The narrative explores significant themes, including the implications of AI on faith and the potential for spiritual awakening amid its challenges.