Government must confront the civil rights challenges of facial recognition 

Artificial intelligence now touches almost every aspect of our daily lives, but it is far from perfect. Its imperfections matter most when its applications affect civil rights.

Consider, for example, facial recognition: a type of AI that scans a large dataset of facial images to determine whether two images belong to the same person. The US government has harnessed this enormous power, procuring, deploying, and promoting the spread of facial recognition across law enforcement, homeland security, and even public housing.

Facial recognition is a quintessential case study of what can happen when complex, evolving technologies such as AI are deployed in the real world rather than the laboratory: it shows how they can undermine our freedoms.

At my request, the US Civil Rights Commission conducted a months-long investigation of federal use of facial recognition within the Departments of Justice, Homeland Security, and Housing and Urban Development. The bipartisan report the Commission released last fall acknowledged the technology's usefulness in locating missing children, solving crimes, and combating terrorist threats.

However, the report also highlights the significant risks that facial recognition poses to the civil rights of all Americans. The report is important not only for its contributions to an understudied field, but also for the rare consensus it won from an institution evenly divided between Democratic and Republican appointees. Civil rights concerns about AI, it turns out, are not partisan.

Facial recognition technology is a system of interdependent components trained primarily on images of white faces, producing models that struggle to accurately recognize non-white people in real-world settings. Such systems can still process images of people from underrepresented groups, but they do so with higher error rates.

The camera technologies that facial recognition algorithms rely on can capture dramatically different renderings of the same person's skin tone, depending on the camera's quality, its positioning, and the lighting of the environment. As a result, facial recognition algorithms may return false positive matches, in which the system treats two different people as the same person, or false negatives, in which the system fails to recognize two images of the same person as a match.

In other words, facial recognition has flaws that disproportionately harm people of color, as well as women and the elderly.

It is important to understand fully how these technological defects translate into real-world outcomes. When law enforcement relies on untested facial recognition, fails to properly train agents in its use, or does not disclose its use to defendants in criminal cases, a false positive match can ruin an innocent person's life.

Michigander Robert Williams experienced this firsthand when he was wrongfully arrested for a robbery at a Shinola store in Detroit after two blurry surveillance photos became the basis for a faulty facial recognition match.

His case, which involved the omission of facial recognition's role from the arrest warrant and an unreliable photo lineup procedure, led to an unprecedented settlement with the Detroit Police Department. The settlement requires training on the risks of facial recognition, especially when used on people of color, and a major rollback of the department's reliance on the technology.

But such outcomes are hardly guaranteed. Imagine the direct and collateral consequences for Williams and his loved ones if exculpatory evidence had not surfaced and he had been convicted, or, more realistically, had been pressured into pleading guilty under the weight of the criminal legal system.

A recent investigation by the Washington Post found that police departments across the United States frequently ignore their own internal policies aimed at preventing these inaccurate identifications.

This is particularly alarming given the testimony of Miami Police Department Assistant Chief Armando Aguilar, who told the Commission last spring that the facial recognition software used by his department is accurate only about 40% of the time before the human review required by departmental guidelines. Even then, human reviewers can fall victim to "automation bias," the tendency to favor suggestions from automated systems and discount contradictory information.

Facial recognition technology is problematic beyond law enforcement as well. When public housing authorities use facial recognition to monitor tenants, and those tenants' incomes leave them no meaningful housing alternatives, they are forced to choose between housing and privacy, and to bear the consequences of the technology's errors.

Such issues were at the heart of a 2023 Washington Post investigation into the widespread use of facial recognition-enabled surveillance systems by public housing authorities.

The testing, training, and deployment guardrails for facial recognition technology, as they exist today, are neither holistic nor standardized enough to account for the complex, real-world scenarios in which federal and local governments deploy it.

In sum, the federal government relies heavily on facial recognition in real-world scenarios without consistently ensuring a "human in the loop" who can independently review search results. Beyond the privacy risks inherent in mass surveillance, this reliance forms the basis for false arrests, wrongful convictions, and unfair housing practices.

As AI proliferates because of its usefulness, we must remain alert to the ever-growing risks it poses to civil rights and civil liberties. Several key recommendations to the federal government in the Commission's report provide a framework for preventing such harms by federal and local governments, and even private actors.

First, facial recognition testing and training should be mandatory, standardized, and inclusive of real-world scenarios.

Second, public transparency should be prioritized in departments' and agencies' use of facial recognition, such as by posting usage policies on their websites and notifying criminal defendants when facial recognition has been used against them.

Third, individuals harmed by the misuse or abuse of facial recognition technology should have a statutory mechanism for redress.

This moment in history offers the US government an important opportunity to realize the immeasurable potential of this technology while thoroughly protecting the civil rights and civil liberties of all Americans.

Mondaire Jones is a member of the US Civil Rights Commission and previously represented New York's 17th Congressional District as a Democrat, serving on the House Judiciary and Ethics Committees.
