Most AI-Produced Deepfake Porn in Schools Affects Children Aged 14 and Below
Emerging Concerns Over AI-Generated Explicit Images in Schools

The rise of AI-driven "nudify" apps and deepfake technology has triggered a worrying trend among students. According to a recent survey of UK educators, a significant portion of the sexually explicit images being created and distributed involve children under 14, with some victims as young as 11.

As AI technology rapidly advances, a new form of digital exploitation is emerging: deepfake pornography. This unsettling trend is infiltrating schools, where students increasingly use "nudify" apps to fabricate sexualized images of their peers, educators, and even themselves. The consequences can be catastrophic, leading to extensive psychological trauma and distress for those targeted.

A recent poll, commissioned by Teacher Tapp and reported by the Guardian, found that about 10% of secondary school teachers in England were aware of "deepfake sexually explicit videos" created by students within the last academic year. Alarmingly, around 75% of these cases involved children under 14, with reports of students as young as 11 being affected. The easy availability of these AI tools has made it simple for children to engage in such harmful actions.

The repercussions for victims can be severe. Young girls and women affected by deepfake pornography frequently express feelings of violation and humiliation. Friendships can suffer, and victims may struggle to interact with classmates. In extreme situations, the emotional toll has led some students to feel physically ill upon realizing explicit images of themselves are circulating.

It’s important to note that while much of this abuse targets girls, boys are not exempt. Everyone’s Invited (EI) has documented instances where boys have been subjected to similar actions, with AI-generated explicit images created and disseminated within school environments, resulting in notable distress.

Teachers are not spared from deepfake pornography either. Many educators strive to support their students but often lack proper training to navigate these situations. The absence of clear protocols, and inconsistency in how these offenses are addressed, have left numerous teachers feeling confused and overwhelmed about how best to respond.

In the United States, this issue has been causing concern for some time. In 2024, reports surfaced about several students being expelled from a Beverly Hills middle school for distributing deepfake pornography involving their classmates.

The incident came to light in February, when explicit images featuring the faces of 16 eighth-graders superimposed on digitally generated nude bodies circulated via a messaging app. Although the identities of the victims remain undisclosed, the incident shook the community and raised serious alarms about the misuse of this technology.

Such AI-generated imagery, referred to as “deepfakes,” is alarmingly convincing, especially to those who are untrained in identifying them. Prior reports highlighted the skyrocketing popularity of apps that produce deepfake porn.

Superintendent Michael Bregy said the five expelled students were found to be "most egregiously involved" in the creation and sharing of the explicit images. The expulsion agreement remains confidential, though it reportedly sets out the duration of the expulsions and the conditions for the students' return to school.
