Thousands of images created with artificial intelligence depicting children subjected to sexual abuse could soon flood the internet, according to new data published by the Internet Watch Foundation.
The IWF, a United Kingdom-based organization responsible for removing images from the internet that exploit children, found that these images are becoming so realistic that, under U.K. law, they can be treated as real.
Thousands of these images have already been uncovered.
“Earlier this year, we warned AI imagery would soon become indistinguishable from real pictures of children suffering sexual abuse and that we could start to see this imagery proliferating in much greater numbers. We have now passed that point,” Susie Hargreaves, chief executive of IWF, said in a statement.
The foundation added that some images it examined would be “difficult for trained analysts to distinguish from actual photographs,” and that this is expected to continue as the technology improves.
The IWF said that its “worst nightmare” is coming true.
In one month, the IWF found approximately 3,000 images in violation of U.K. law that depicted child sexual abuse, with the majority treated as real because they looked so realistic.
AI-generated images could also become so common that they distract analysts and divert resources away from real cases, the IWF said.
“International collaboration is vital. It is an urgent problem, which needs action now. If we don’t get a grip on this threat, this material threatens to overwhelm the internet,” Hargreaves’ statement said.
© 2023 Newsmax. All rights reserved.