Artificial intelligence is already revolutionizing law enforcement and bringing advanced technology to investigations, but “society has a moral obligation to mitigate harmful consequences,” says a recent study.
AI is still in its adolescence, as some experts put it, but law enforcement agencies are already incorporating predictive policing, facial recognition and gunshot detection technology into their investigations, according to a North Carolina State University report released in February.
The report is based on 20 semi-structured interviews with law enforcement professionals in North Carolina about how AI affects relationships between police and the communities within their jurisdictions.
“We found that survey participants were unaware of the limits of AI and AI technology,” said study co-author Jim Brunett, who is also the director of North Carolina’s Public Safety Leadership Initiative.
“This included AI techniques that participants were using at work, such as facial recognition technology and gunshot detection technology,” he said. “However, study participants expressed support for these tools, which they felt were of value to law enforcement.”
Law enforcement officials believe AI will improve public safety, but the technology could also undermine trust between police and civilians, the study found.
The study was conducted as American cities grapple with the politically divisive challenge of curbing crime while restoring public confidence in police following the murder of George Floyd at the hands of a since-disgraced police officer.
Ed Davis, who served as Boston’s police commissioner during the 2013 Boston Marathon bombing, told Fox News Digital that AI “will ultimately allow us to improve investigations and bring many dangerous criminals to justice.”
But the technology comes with risks and pitfalls, Davis said, because criminals have access to the same tools, which could undermine police investigations.
The veteran police commissioner’s comments are supported by the study’s findings.
“Policymaking, guided by public consensus and collaborative discussion with law enforcement experts, must aim to promote accountability and support the responsible design and application of AI in policing so that it provides social benefits and reduces harm to the public,” the study concludes.
“Society has a moral obligation to mitigate the negative impacts of fully integrating AI technology into law enforcement.”
Ronald Dempsey, lead author of the study and a former graduate student at North Carolina State University, said part of the problem is police officers’ general lack of knowledge about what AI is capable of and how it works.
This “makes it difficult or impossible for them to recognize limitations and ethical risks,” Dempsey said. “It could cause serious problems for both law enforcement and the public.”
The use of facial recognition by law enforcement surged after the January 6, 2021 Capitol riots.
Twenty of the 42 federal agencies surveyed by the Government Accountability Office in 2021 reported using facial recognition in criminal investigations.
The study found that, if new AI technologies are “well regulated and carefully implemented,” their public safety benefits “could potentially increase community trust in police and criminal justice systems.”
“However, study participants expressed concerns about the risk of algorithmic bias (diversity and representativeness challenges), the challenge of replicating the human element of empathy, and privacy and trust.
“Furthermore, challenges of fairness, accountability, transparency and explainability remain, as presented in the broader academic debate,” the study said.
The study found that AI has the power to either bridge or deepen the divide between police and the public, and that it is essential for law enforcement leaders to be present in any discussion about frameworks for how police can use AI.
Veliko Dubrjevic, the corresponding author of the study and an associate professor at North Carolina State University, said guidelines developed through those discussions can be used to inform decision-making about AI.
“It is also important to understand that AI tools are not foolproof,” said Dubrjevic. “AI has its limits, and if law enforcement officers don’t understand those limits, they may place more value on AI than it deserves, which in itself can raise ethical issues.”
Police use of facial recognition has already led to mistakes resulting in wrongful arrests.
A 2019 study by the National Institute of Standards and Technology found that AI algorithms incorrectly identified African-American and Asian faces 10 to 100 times more often than Caucasian faces.
“There’s always a risk when law enforcement adopts technology that wasn’t developed with law enforcement in mind,” said Brunett.
“This certainly applies to AI technologies such as facial recognition, so it is important that law enforcement officers receive some training on the ethical aspects of using these AI technologies.”
The research emphasizes building a culture of transparency and accountability around how AI technology is used in police investigations.
A recent New York Times report on a wrongful arrest based on facial recognition found no mention of the technology’s use in court documents or police reports, even as the practice becomes increasingly prevalent.
“As a final point, AI surveillance technology must be able to explain how decisions are made, at least in general terms,” the North Carolina study said.
“Law enforcement professionals should, at a minimum, have a broad understanding of the AI technology in use across their jurisdiction and the criminal justice system, which requires procedural training for police officers employing artificial intelligence technology.”
The study focuses on North Carolina and is intended as a “snapshot” of emerging trends that call for more research and more education among law enforcement professionals.
Chris Eberhart is a crime and U.S. news journalist at Fox News Digital. Email tips to him at [email protected] or on Twitter @ChrisEberhart48.