New Delhi:
Google has significantly departed from its earlier commitment not to use artificial intelligence for weapons or surveillance, updating its AI ethics guidelines accordingly.
The company's original AI principles, published in 2018, explicitly prohibited AI applications in four areas: weapons, surveillance, technologies likely to cause overall harm, and uses that contravene international law and human rights.
Now, in a blog post, Demis Hassabis, Google's head of AI, and James Manyika, senior vice president for technology and society, have explained the change, pointing to AI's growing presence and the need for companies in democratic countries to work with governments on national security.
"A global competition for AI leadership is under way in an increasingly complex geopolitical landscape," Hassabis and Manyika wrote. "We believe democracies should lead in AI development, guided by core values such as freedom, equality, and respect for human rights."
The updated principles emphasize human oversight and feedback to ensure that AI complies with international law and human rights standards. Google also pledges to test its AI systems to reduce unintended harmful effects.
The move marks a major shift from Google's previous position, which drew attention in 2018 when the company faced internal protests over a Pentagon contract known as Project Maven, under which Google's AI was used to analyze drone footage. Thousands of employees signed an open letter urging Google to stay out of military projects, declaring that "Google should not be in the business of war." As a result, Google chose not to renew the contract.
Since OpenAI launched ChatGPT in 2022, AI has advanced rapidly while regulation has struggled to keep pace. With this shift, Google has eased its voluntary restrictions. Manyika and Hassabis noted that AI frameworks developed by democratic nations have helped shape Google's understanding of AI's risks and potential.