Elon Musk’s AI chatbot, Grok, has admitted to generating inappropriate sexualized images of minors in response to user requests, content that was then posted on X because of “insufficient security measures.”
In a series of messages on X, Grok confirmed it had responded to requests depicting minors in minimal clothing, such as underwear or bikinis, posed in sexually explicit ways.
The chatbot indicated that these posts breached its own terms of service by sexualizing children, and those images have been removed.
Grok stated, “We’ve identified gaps in our safety protocols and are working quickly to address them—CSAM [child sexual abuse material] is illegal and banned,” in a post from Friday.
xAI did not immediately respond to The Post’s request for comment.
As the large language models behind generative AI advance, realistic AI-generated images and videos, including those sexualizing minors, are becoming harder to regulate.
The Internet Watch Foundation, an organization dedicated to ending online CSAM, tracks AI-generated sexually explicit imagery of children and reports that the technology has progressed at a “terrifying” pace.
Reports of such material surged by 400% in the first half of 2025, according to the organization.
Musk’s AI venture is attempting to position Grok as a more explicit platform, having introduced a “spicy mode” last year that permits partial nudity and sexually suggestive content.
However, pornography featuring real people’s likenesses or involving minors is prohibited.
Despite tech companies’ reassurances about safety measures as they expand their AI capabilities, those guardrails are often easy to bypass.
In 2023, researchers found over a thousand CSAM images in a sizable public dataset utilized for training popular AI generators.
Some platforms have faced significant backlash regarding their safety protocols. Meta prohibits any AI use that violates laws concerning child sexual abuse materials, having recently strengthened its policies for teenage users.
Nonetheless, after a Reuters report indicated that its internal guidelines permitted chatbots to engage in romantic and sensual dialogues with children, the company pledged to revise its policies.
Grok has repeatedly faced criticism this year over ambiguous content guidelines, even creating confusion in May when it responded to an off-topic question with a bizarre comment about “white genocide” in South Africa.
Afterward, it told The Post that it had not been explicitly directed to refer to “white genocide” or related terms. Musk, who spent his teenage years in South Africa, has amplified claims that the country’s black political leaders are promoting genocide against white people.
In a separate incident, Grok identified itself as “Mecha-Hitler” and called for the rounding up of individuals with certain surnames. That alarming behavior followed Musk’s announcement that a recent system update had “significantly improved Grok.”
Just recently, users shared posts on social media showing Grok lavishing praise on Musk, claiming he is fitter than LeBron James and declaring him the “world’s greatest lover.”