European lawmakers pass AI Act, world’s first comprehensive AI law

European lawmakers have approved the world’s most comprehensive bill on artificial intelligence to date, setting out sweeping rules for developers of AI systems and new limits on how the technology can be used.

The European Parliament gave final approval to the law on Wednesday, after reaching a political agreement with European Union member states in December last year. The rules, which will take effect gradually over several years, ban certain uses of AI, introduce new transparency rules and require risk assessments of AI systems deemed high risk.

Members of the European Parliament participate in a vote during a plenary session at the European Parliament in Strasbourg, eastern France, on March 13, 2024. (Frédéric Florin/AFP via Getty Images)

The law comes amid a global debate about the future of AI and its potential risks and benefits, as AI technology is increasingly adopted by businesses and consumers. Elon Musk recently sued OpenAI and its CEO Sam Altman, accusing the company of violating its founding agreement by prioritizing profits over AI’s benefits to humanity. Altman said AI should be developed with great care and has immense commercial potential.

The new law applies to AI products on the EU market, regardless of where they were developed. The regime is backed by fines equal to up to 7% of a company’s global revenue.

The AI Act is "the world's first regulation that provides a clear path towards the safe and human-centered development of AI," said Brando Benifei, an Italian EU lawmaker who helped lead negotiations on the law.

The law still requires final approval from EU member states, which have already given political approval to the bill, so the process is expected to be a formality.

Although the law only applies within the EU, it is expected to have a global impact as large AI companies are unlikely to give up access to the EU, which has a population of around 448 million people. Other jurisdictions may also use this new law as a model for AI regulation, contributing to the ripple effect.

"Anyone who wants to create or use an AI tool has to follow that rulebook," said Guillaume Couneson, a partner at law firm Linklaters.

Several jurisdictions around the world have introduced or are considering new rules regarding AI. Last year, the Biden administration signed an executive order requiring major AI companies to notify the government when they develop models that could pose significant risks. Chinese regulators have set rules focused on generative AI.

Vice President Kamala Harris introduces President Joe Biden during an event on the administration’s efforts to regulate artificial intelligence in the East Room of the White House on October 30, 2023 in Washington, DC. (Getty Images)

The EU's AI law is the latest example of the bloc's role as an influential global rule-maker. Competition rules that took effect earlier this month are forcing Apple to change its App Store policies and Alphabet's Google to change how it presents search results to users in the region. Another law focused on online content requires major social media companies to report on the efforts they are taking to combat illegal content and misinformation on their platforms.

The AI Act does not take effect immediately. Its prohibitions, including bans on the use of emotion-recognition AI in schools and workplaces and on untargeted scraping of images for facial-recognition databases, are expected to take effect later this year. Other obligations will be phased in from next year through 2027.

The new rules will ultimately require providers of general-purpose AI models, which are trained on large datasets and power more specialized AI applications, to maintain up-to-date technical documentation for their models. They must also publish a summary of the content used to train those models.

Makers of the most powerful AI models, deemed to carry what the EU calls "systemic risk," will be required to conduct state-of-the-art safety evaluations and to notify authorities of serious incidents involving their models. They must also take measures to mitigate potential risks and ensure cybersecurity protections, the law states.

The bloc’s first proposed bill was published in 2021, before OpenAI’s ChatGPT and other AI-powered chatbots were widely available, and general-purpose AI provisions were added during the legislative process.

Before ChatGPT and other AI chatbots became popular, European legislators began drafting new AI laws for the EU. (Nicolas Economou/NurPhoto via Getty Images / Getty Images)

Industry groups and some European governments opposed the introduction of blanket rules for general-purpose AI, arguing that legislators should focus on the risky uses of AI, rather than the models that support its use.

France and Germany, home to Mistral AI, sought to water down some of the bill's proposals. Mistral CEO Arthur Mensch recently said the AI Act would be a manageable burden for the company after final negotiations reduced some obligations, though he believes the law should have remained focused on how AI is used rather than on the underlying technology.

Lawmakers said the AI Act was one of the most heavily lobbied bills the bloc had handled in recent years.

Consumer watchdog groups and some lawmakers had said they wanted the bill to include stricter requirements, including safety-assessment and risk-mitigation rules, for all general-purpose AI models, not just the most powerful ones.

Lobby group Business Europe said on Wednesday that it supports the law’s risk-based approach to AI regulation, although there are questions about how it will be interpreted in practice. Digital rights group Access Now said the final text of the bill was riddled with loopholes and failed to adequately protect people from some of the most dangerous uses of AI.

Another element of the law calls for clear labeling of so-called deepfakes: images, audio or videos that are generated or manipulated by AI and appear to be real. AI systems deemed high-risk by lawmakers, such as those used for immigration or critical infrastructure, must undergo risk assessments to help ensure they use high-quality data, among other requirements.

European lawmakers said they were working to make the bill flexible to adapt to rapidly evolving technology. For example, part of the law states that the European Commission, the EU’s executive arm, can update the technical elements of the definition of a general-purpose AI model based on market and technology developments.
