Apple and Google App Stores Provide Many AI-Driven ‘Nudify’ Apps Following the Elon Musk Grok Incident

Tech Giants Face Scrutiny for AI-Powered Apps

According to a recent report by the Tech Transparency Project, major tech companies Apple and Google host a variety of AI apps that are capable of generating non-consensual nude images from standard photos. This revelation comes in the wake of a global uproar following incidents involving Elon Musk’s Grok AI, which created and shared deeply inappropriate images of women and children on social media platforms.

The Tech Transparency Project found that, as of January, 55 "nudify" apps were available on Google Play and 47 on the Apple App Store. These applications use AI to transform images of clothed individuals into explicit versions without their permission.

The report prompted swift action from Apple, which was contacted by representatives of both the Tech Transparency Project and CNBC. An Apple spokesperson confirmed on Monday that the company had removed 28 of the identified applications and warned that further violations of its guidelines could lead to additional removals from the App Store.

Interestingly, two apps were eventually reinstated after their developers submitted revised versions that complied with Apple’s policies, as confirmed by an Apple representative.

However, a follow-up review by the Tech Transparency Project found that only 24 of the apps had actually been removed, fewer than Apple reported.

Google responded similarly, suspending several apps flagged in the study for breaching Google Play Store policies. A Google spokesperson said the company investigates reports of such violations but declined to disclose the number of apps removed, citing ongoing inquiries.

In light of this, the Tech Transparency Project remarked that while both companies assert their commitment to user safety, they still host apps that can transform harmless photos into abusive, sexualized content.

Researchers utilized specific search terms to identify these problematic applications and tested them with AI-generated images. They classified the apps into two categories: one that creates images of unclothed women and another that overlays the faces of individuals onto existing nude images.

Katie Paul, director of the Tech Transparency Project, shared with CNBC, “It’s evident these apps go beyond just dressing up images. They are made for the non-consensual sexualization of people.”

The backdrop of this report is significant, as it follows a controversy surrounding Elon Musk’s xAI, where the Grok AI tool faced fierce criticism for generating explicit content involving women and children, raising alarms about safety protocols and content moderation in AI technologies.

The investigation also revealed that 14 of the scrutinized applications were based in China, raising additional security and privacy concerns. Paul pointed out that China’s data retention laws allow the government to access data from any company operating within its borders. This means that if someone uses these apps to create deepfake nudes, that sensitive information could end up in the hands of the Chinese authorities.

In response to user concerns about Grok, a representative acknowledged the “lack of safeguards” and assured users that they were working urgently to address these issues. Additionally, the European Commission announced a formal investigation into X, examining Grok’s role in disseminating sexually explicit content.
