Last month, the White House published new rules establishing how the federal government will use artificial intelligence systems, including basic safety and civil rights protections.
Given A.I.'s well-documented potential to discriminate, supercharge surveillance, and cause other harms, these rules are urgently needed as federal agencies race to deploy the technology.
The good news is that in most respects the new rules, set out in a memo from the Office of Management and Budget, are clear, wise, and strong. Unfortunately, they give agencies too much discretion to opt out of key safeguards, severely undermining their effectiveness.
Before federal agencies move further down the AI path, the Biden administration needs to make changes to ensure opt-outs are the exception rather than the rule.
But let's start with the good, of which there is a lot. The OMB memo establishes a wide range of "minimum risk management practices" that federal agencies must implement.
Before using AI that could affect people's rights or safety, agencies must conduct impact assessments, paying particular attention to potential risks to underserved communities, such as racial minorities wrongfully arrested because of facial recognition errors, or low-income households denied benefits by flawed algorithms.
They must also assess whether AI is a better fit than other means of achieving the same goals, an important threshold given that AI systems are often poorly suited to the task and can cause harm. They must then test the technology's real-world performance and mitigate emerging risks through continuous monitoring.
If an agency fails to implement these practices, or if testing shows that an AI system is unsafe or violates people's rights, the agency is prohibited from using the technology. All of this underscores a key tenet of the OMB memo: if the government cannot protect people from algorithmic harms, the AI is off the table.
But given how robust these new rules are, it is all the more alarming that OMB gives agencies wide latitude to circumvent them.
One loophole allows an agency, and the agency alone, to waive the minimum practices by determining that compliance would "increase risks to safety or rights overall" or "create an unacceptable impediment to critical agency operations." Such vague standards are easy to abuse, and it is hard to see how practices designed to reduce risk would increase it.
Agencies also have leeway to opt out if they decide that AI is not the "principal basis" for a particular decision or action. A similar loophole in a New York City law has undermined its effectiveness in combating hiring bias. The law requires employers to audit their AI-powered hiring tools for racial and gender bias and post the results, but only when the tools "substantially assist or replace" human decision-making. As a result, few employers have published audits.
You don't have to look far to see the consequences of such broad exemptions. Agencies are already integrating AI into a variety of functions with few safeguards, and the results have not been encouraging.
An app used by Customs and Border Protection, for example, relies on facial recognition to screen migrants and asylum seekers. Its poor accuracy in identifying people with darker skin has unfairly blocked Black asylum seekers from submitting their applications.
In 2021, the Department of Justice found that an algorithm used to evaluate who is granted early release from federal prison disproportionately predicted that Black, Asian, and Hispanic people would reoffend, making them less likely to qualify.
AI has also infiltrated Medicaid, a program jointly managed by the federal government and the states that provides home care for the elderly and people with disabilities. More than 20 states use algorithms that have been linked to arbitrary and unreasonable cuts in home care hours. Thousands of beneficiaries have been improperly denied care, forced to skip medical appointments, go without meals, or sit in urine-soaked clothes.
Worse, the decision to opt out of OMB's minimum practices is left solely to the discretion of the "chief artificial intelligence officer," the official each agency must designate to oversee its use of AI. In most circumstances, these officials must report such decisions to OMB and explain them to the public, unless the decision involves, for example, classified information. But the decisions are final and not subject to appeal.
And longstanding weaknesses in how agencies police themselves could undermine chief AI officers' important oversight role. The Department of Homeland Security's privacy and civil rights watchdogs, for example, are chronically understaffed and siloed from operational decision-making. Under their watch, the department has evaded basic privacy obligations and conducted intrusive and biased surveillance of questionable intelligence value.
These flaws need not doom the OMB memo. Federal agencies should limit waivers and opt-outs to truly exceptional circumstances and exercise their discretion in ways that prioritize public trust over expediency or secrecy. OMB, in turn, should carefully scrutinize such decisions and ensure they are clearly explained to the public. If it finds that waivers or opt-outs are being abused, it should reconsider whether to allow them at all.
Ultimately, however, the responsibility for enacting comprehensive protections lies with Congress, which can codify these safeguards and establish independent oversight of how they are enforced. The risks are too high, and the harms too great, to leave gaping loopholes in place.
Amos Toh is senior counsel at the Brennan Center for Justice.
Copyright 2024 Nexstar Media Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.





