Microsoft has strengthened its stance on the use of generative AI for facial recognition by U.S. law enforcement through its Azure OpenAI service, a managed enterprise solution built on OpenAI technology.
According to a TechCrunch report, Microsoft clarified in a recent update to the Terms of Service for its Azure OpenAI Service that it prohibits use of the service for facial recognition “by or on behalf of” U.S. police departments. The ban also applies to current and potential future image analysis models developed by OpenAI.
The updated policy also addresses law enforcement agencies around the world, specifically prohibiting the use of “real-time facial recognition technology” on mobile cameras, such as body cameras and dash cams, to attempt to identify individuals in uncontrolled environments.
[Photo: A protester gestures to New York City police officers at the Lexington Avenue/63rd Street subway station during the “Justice for Jordan Neely” protest, May 6, 2023, in New York City. Alexi Rosenfeld/Getty Images]
These changes come on the heels of Axon announcing a new product that uses OpenAI’s GPT-4 generative text model to summarize audio from body cameras. Critics were quick to highlight potential problems with the application, including the tendency of generative AI models to fabricate facts (known as hallucinations) and the introduction of racial bias from training data. Critics say the latter is particularly concerning given the disproportionate number of people of color stopped by police compared to white people.
While it remains unclear whether Axon was using GPT-4 through the Azure OpenAI Service, or whether the updated policy is a direct response to that product launch, the move signals a more cautious stance on AI in law enforcement and is consistent with Microsoft and OpenAI’s recent approach to defense contracts.
However, the new terminology leaves room for interpretation. The complete ban on the use of the Azure OpenAI Service applies only to U.S. police forces, not to international law enforcement agencies. Additionally, the real-time facial recognition prohibition covers only mobile cameras in uncontrolled environments; facial recognition performed with stationary cameras in controlled settings, such as a back office, is not covered.
This stance is consistent with Microsoft and OpenAI’s recent work with government agencies. Reports in January revealed that OpenAI was working with the Department of Defense on a variety of projects including cybersecurity capabilities, marking a shift from the previous ban on providing AI to the military. Meanwhile, Microsoft is proposing the use of OpenAI’s image generation tool “DALL-E” to support the Department of Defense’s software development for military operations.
Read more at TechCrunch.
Lucas Nolan is a reporter for Breitbart News, covering free speech and online censorship issues.