Surge in AI-Generated Child Sexual Abuse Videos Raises Alarm
The Internet Watch Foundation (IWF) has raised a serious alarm regarding the sharp rise in AI-generated videos depicting child sexual abuse, with many classified as the most severe form of abuse.
Recent findings from the IWF show a sharp increase in AI-generated child sexual abuse material (CSAM), sometimes still referred to as child pornography. The UK-based organization verified 1,286 AI-generated videos containing such content in the first half of 2025, up from just two over the same period last year.
The IWF has voiced grave concerns over the growing sophistication of these AI-generated videos, which are now nearly indistinguishable from footage of real abuse. Disturbingly, more than 1,000 of the verified videos fell into Category A, the most severe abuse classification.
Analysts at the IWF link this trend to the massive investment flowing into the AI sector, which has made video generation models widely accessible. That easy access, combined with rapid technological advances, has created fertile ground for offenders to produce and distribute CSAM.
Furthermore, the IWF's findings reveal that offenders are actively discussing and sharing methods for manipulating AI models on dark web forums. By fine-tuning freely available AI tools on a small number of real CSAM videos, these individuals can produce disturbingly realistic abuse videos.
IWF interim CEO Derek Ray-Hill has cautioned about the "incredible risks" these AI-generated materials pose. He fears they could trigger an "absolute explosion" of such content on the clear web and encourage related criminal activity, including child trafficking, sexual abuse, and modern slavery.
In response to this escalating threat, the UK government is proposing new legislation aimed at curbing AI-generated CSAM. If enacted, the law would impose prison sentences of up to five years for owning, creating, or distributing AI tools designed to generate abusive content, and up to three years for possessing guides that explain how to use AI to create abusive imagery.
In the United States, 38 states have enacted laws against AI-generated CSAM, while 12 states and Washington, DC, have yet to act on the issue.