Australia’s internet regulator has signaled it may ask search engines and app stores to block AI services that fail to verify users’ ages, after a Reuters investigation found that more than half of the companies surveyed had not implemented compliance measures ahead of the upcoming deadline.
The move marks one of the most significant efforts anywhere in the world to regulate AI companies, which face a surge of lawsuits accusing them of failing to prevent, or even encouraging, self-harm and violence. Some researchers suggest such platforms could be more damaging to young people’s mental health than social media.
In December, Australia became the first country to bar minors from social media, citing mental-health concerns, and other nations are weighing similar measures. It now aims to enforce age restrictions on AI-generated content as well.
From March 9, internet services operating in Australia, including search tools and chatbots such as OpenAI’s ChatGPT, must prevent users under 18 from accessing material depicting pornography, extreme violence, self-harm, or eating disorders, or face fines of up to A$49.5 million ($35 million).
A spokesperson for eSafety, the regulator, said it would use the full range of its powers in cases of non-compliance, which could include action against key access points such as search engines and app stores.
OpenAI and its rival Character.AI both face wrongful death lawsuits over their chatbots’ interactions with young users. It recently emerged that OpenAI had disabled a ChatGPT account belonging to a teenager suspected in a mass shooting months before the attack, without alerting authorities.
Although no violence linked to chatbots has been reported in Australia, the regulator said children as young as 10 are already talking to these AI tools for up to six hours a day.
An eSafety representative expressed concern that AI companies could use emotional manipulation and other sophisticated techniques to keep young users engaged for excessive periods.
Apple, the largest app store operator, did not respond to a request for comment, but says on its website that it uses “reasonable means” to prevent minors from downloading age-restricted apps, without explaining how that works.
Google, which runs Australia’s largest search engine and second-biggest app store, also declined to comment.
The Reuters review found that, a week before Australia’s deadline, only nine of the 50 most popular text-based AI products had launched or announced age-verification systems. The assessment was based on how the platforms responded to inquiries about content moderation and on their published policies.
A further 11 platforms had either applied blanket content filters, blocked Australian users entirely, or begun restricting content for all users. Roughly 30 platforms, however, showed no clear steps toward meeting the new legal requirements.
Prominent chatbots including ChatGPT, Replika, and Anthropic’s Claude are beginning to roll out age verification and broad content filters, while Character.AI has ended unrestricted chats for users under 18.
Some providers, including Candy AI, Pi, Kindroid, and Nomi, said they plan to comply with the regulations but offered no specifics. HammerAI said it will initially block Australian users to align with the new code.
They remain the minority: about three-quarters of companion chatbots have neither content filters nor age verification, and many do not publish the contact information for reporting violations that the code requires.
Elon Musk’s chatbot Grok, already under global scrutiny for allegedly failing to remove sexually explicit imagery of minors, has neither age verification nor filtering measures, according to Reuters. Its parent company, xAI, did not respond to requests for comment.
Lisa Given of RMIT University said the Reuters findings were unsurprising, noting that “most of these tools are designed without considering the potential harm or the need for that kind of safety control.” She added, “It’s almost as if we’re beta testing for these companies, seeing how far society can endure these issues.”