
Revealed: majority of online child sexual abuse reports go uninvestigated, lawyers say

Social media companies that rely on artificial intelligence software to moderate their platforms are generating unviable reports on child sexual abuse cases, preventing US police from pursuing potential leads and delaying investigations of suspected predators, the Guardian has learned.

By law, US-based social media companies are required to report any child sexual abuse material they detect on their platforms to the National Center for Missing & Exploited Children (NCMEC). NCMEC acts as a national clearinghouse for child abuse information, which it forwards to the relevant law enforcement agencies in the US and around the world. The organization said in its annual report that it received more than 32 million reports of suspected child sexual exploitation from companies and the public in 2022, comprising roughly 88 million images, videos and other files.

Meta is the largest reporter of this material by far: more than 27 million reports (84%) in 2022 originated on its Facebook, Instagram and WhatsApp platforms. NCMEC is partially funded by the Department of Justice, but it also relies on private and corporate donations, including one from Meta, which holds a seat on NCMEC's board of directors. Neither NCMEC nor Meta would disclose the size of the donation.

Social media companies, including Meta, use AI to detect and report suspicious content on their sites, and employ human moderators to review some of the flagged content before it is sent to law enforcement. However, US law enforcement agencies can only open AI-generated reports of child sexual abuse material (CSAM) by serving a search warrant on the company that filed the report. Requesting a warrant from a judge and waiting to receive it can add days or even weeks to the investigation process.

“If a company reports a file to NCMEC without indicating that it viewed the file before reporting, we cannot open that file,” said Staca Shehan, vice-president of the analytical services division at NCMEC. “When we send it to law enforcement, they can't view or open it without first serving legal process on [the social media company].”

These restrictions stem from the fourth amendment's protection against unreasonable searches and seizures by the government: neither law enforcement officials nor the federally funded NCMEC may open a report of potential abuse without a search warrant unless the contents of the report were first reviewed by a representative of the social media company.

These practices were cemented more than a decade ago by a 2013 ruling in which the US district court for Massachusetts found that NCMEC was acting as a government agent in an investigation into the alleged dissemination of child abuse material online. Several federal courts have reached the same conclusion since then. In 2021 litigation, the ninth circuit court of appeals, which covers a group of west coast states, held that it violated the fourth amendment for law enforcement officials to review child abuse reports generated by Google's AI without a warrant.

Because NCMEC staff and law enforcement cannot legally view the contents of AI-generated reports that no human has seen, investigations into suspected predators can stall for weeks at a time, potentially resulting in the loss of evidence, according to child safety experts and attorneys.

“Any delay [in viewing the evidence] that lets criminals go undetected longer is detrimental to ensuring community safety,” said an assistant US attorney in California, who spoke on condition of anonymity. “They are a danger to all children.”

In some cases, social media companies deactivate a user's account after submitting a report in order to prevent further illegal activity on the platform. Doing so can result in evidence related to the alleged crime being removed from the platform's servers.

“It's frustrating,” the California-based assistant US attorney said. “By the time an account is identified and a warrant is obtained, there may be nothing left.”

In response to the Guardian's inquiries, NCMEC said it does not keep records of how many AI-generated reports it receives. But two federal prosecutors interviewed said most of the tips they receive from the big social media companies cannot be opened because they are AI-generated.

Many AI-generated tips go uninvestigated by law enforcement because they lack the specific information needed for a probable cause affidavit that would persuade a judge to issue a search warrant, said a Massachusetts-based federal prosecutor who asked not to be named. Those tips require additional investigative work, and his office and staff do not have spare capacity, he said.

“Departments are triaging because they're behind and don't have the resources given the volume of tips, so they set them aside to work on when they have the time and resources,” the attorney said. “The reason these tips aren't acted on is that the men and women doing this work are never given enough resources to act on them.”

NCMEC's Shehan said the potential delays were “concerning”.

“When we provide law enforcement with information about a possible crime of child sexual exploitation, we want everyone to take it seriously and take action. When these kinds of barriers add extra procedural steps, that's obviously a concern,” she said.

The Massachusetts-based prosecutor said that relying on AI for moderation puts a strain on the relatively small number of overworked law enforcement agents who investigate these cases, leaving them “drowning in this heartbreaking work”.

“AI may be a solution for treating employees better, but it won't let social media companies find new child abuse material, because AI only captures old data points,” he said. “It's not a solution for improving a world of exploitation. It still needs people paying attention.”

In December, the New Mexico attorney general's office filed a lawsuit against Meta, alleging that its social networks have become a marketplace for child predators and that Meta has repeatedly failed to report illegal activity on its platforms. In response, Meta said that combating child sexual abuse content was a priority.

The state's attorney general laid the blame for the struggle to produce actionable reports at Meta's feet. “Reports showing the ineffectiveness of the company's AI-generated cyber tips prove what we alleged in our complaint: Mark Zuckerberg and Meta executives have intentionally put profits ahead of children's safety,” Raúl Torrez said in a statement to the Guardian.

“To keep children safe, keep parents informed, and enable law enforcement to effectively investigate and prosecute online sex crimes against children, it is long past time for the company to implement changes to its structure, staffing levels, policies and algorithms,” Torrez added.

Despite the legal limitations on moderation AI, social media companies are likely to increase its use in the near future. In 2023, OpenAI, the developer of ChatGPT, announced that its GPT-4 engine could do the job of human content moderators with roughly the same accuracy. The company said the technology could spare people the psychological trauma of viewing violent and abusive content at work.

However, child safety experts argue that the AI software social media companies use to moderate content is only useful for identifying known child sexual abuse images whose digital fingerprints, known as hashes, are already catalogued. Attorneys interviewed said AI is ineffective against newly created images, or against known images and videos that have been altered.

“There is always a concern with cases involving newly identified victims, because the material is new and has no hash value,” said Kristina Korobov, senior attorney at the Zero Abuse Project, a nonprofit focused on combating child abuse. “If humans were doing the work, there would be more discovery of newly identified victims.”

Last year, the major technology companies Meta, Twitter and Alphabet all cut jobs from the teams responsible for moderation. Those cuts will ultimately place a greater burden on law enforcement by reducing the number of child abuse reports reviewed by company personnel and increasing the number of search warrants needed, Korobov said.

“Investigating agencies are already drowning in cyber tips. The reality is that officers don't have the time,” Korobov said. “The bigger issue is the volume of cyber tips coming in, and this is an additional step. Every state deals with thousands of cyber tips a year.”

She argued that increasing the number of human content moderators would ease law enforcement's workload, and called the recent moderator cuts “frustrating”.

“It's a sickening realization that someone at a company, who likely has children they love, decided they could make more money using computers,” she said.

Meta declined to provide a statement on the record.

In the US, call or text the Childhelp abuse hotline on 800-422-4453 or visit their website for more resources and to report child abuse or DM for help. For adult survivors of child abuse, support is available at ascasupport.org. In the UK, the NSPCC offers support to children on 0800 1111, and adults concerned about a child can contact 0808 800 5000. The National Association for People Abused in Childhood (Napac) offers support for adult survivors on 0808 801 0331. In Australia, children, young adults, parents and teachers can contact the Kids Helpline on 1800 55 1800, or Bravehearts on 1800 272 831, and adult survivors can contact Blue Knot Foundation on 1300 657 380. Other sources of help can be found at Child Helplines International.
