Ahead of going public, SpaceX cautions that investigations into sexually abusive AI images pose risks.

Multiple inquiries into xAI’s role in creating and distributing sexual abuse images could jeopardize the company’s access to certain markets, according to a warning from parent company SpaceX noted in a prospectus that Reuters reviewed.

In the risk factors section of its S-1 filing, SpaceX notes that several government entities across the globe are “actively conducting investigations and inquiries related to the use of social media and AI” concerning issues such as advertising, consumer protection, and the distribution of harmful content.

This revelation comes as SpaceX recently brought analysts to its Colossus supercomputer facility in Memphis, Tennessee, in preparation for a monumental $1.75 trillion IPO anticipated this summer.

U.S. securities regulations necessitate that companies disclose potential risks, thereby informing investors of possible dangers, while also offering some legal protection for the companies involved.

However, just because these risks are outlined doesn’t mean every described outcome is likely to happen.

One risk SpaceX points to is the “allegations that our AI products have been used to create non-consensual and explicit images and content depicting children in sexual contexts,” as stated in the S-1 document. Regulatory investigations of this nature could lead to legal actions, liabilities, and governmental repercussions, including the potential loss of market access, as has happened to the company before.

Neither SpaceX nor xAI responded to requests for comment. It is unclear whether future regulatory actions would bar SpaceX as a whole, or only its subsidiary xAI, from certain markets.

Global scrutiny of Grok images

The risk factors section cites as an example an investigation opened by the Irish Data Protection Commission in February, one sign of the growing global scrutiny xAI faces over the surge in sexual imagery.

This issue became particularly evident in late 2025 and early 2026 when the social media platform X hosted images of nearly naked women and children.

In response, xAI explained in January that it enhanced measures to block user requests for sexual images of real people, intending to prevent the generation of such content in areas where it’s illegal.

Images produced by xAI’s chatbot, Grok, have often depicted women, and in troubling instances minors, in revealing clothing or degrading poses.

This situation sparked international alarm, with research revealing around 3 million sexual images circulating. Concurrently, U.S. lawmakers urged Google’s parent company, Alphabet, and Apple to pull Grok and X from their app stores.

At that time, SpaceX CEO Elon Musk claimed he had “literally zero” knowledge of Grok producing images of naked minors.

There are ongoing investigations across several regions, including Canada, the United Kingdom, and Brazil. In France, Musk recently ignored a legal subpoena requesting his testimony about algorithmic abuse, unauthorized data extraction, and involvement in spreading child sexual abuse material.

Risks heighten as investigations progress

The S-1’s warning about potential loss of market access underscores the risks tied to various xAI initiatives, especially allegations that its AI has generated child sexual abuse images and non-consensual sexual content.

Creating these types of images may be a criminal act in certain jurisdictions, and distributing them can evoke swift public outrage.

While xAI’s regulation of Grok seems to have decreased the flow of abusive content, it hasn’t eradicated it entirely.

As reported in February by Reuters, Grok continued to produce sexual images even when users directly informed the chatbot that the individuals depicted were not consenting. Recently, NBC News discovered that Grok is still publicly generating sexually explicit images of various celebrities.

X has previously faced bans in certain jurisdictions, including Brazil in 2024, when the platform was blocked for defying a judge’s order, although the ban was later lifted when the company complied.
