Meta’s Nick Clegg plays down AI’s threat to global democracy

Meta’s Nick Clegg says generative AI has been overhyped as a risk to elections, arguing that the technology can help defend democracy rather than attack it.

Speaking at the Meta AI Day event in London on Tuesday, the social network’s head of global affairs said the evidence from the major elections already held around the world this year is that large language models, image and video generators, and speech synthesis tools have not actually been used to subvert democracy.

“We are right to be vigilant and cautious,” Clegg said. “But it is striking that these tools have not been used systematically to actually try to subvert and interfere with the large-scale elections that have already taken place this year in Taiwan, Pakistan, Bangladesh and Indonesia.

“When it comes to bad content, I would encourage everyone to think of AI as a sword, not just a shield. There’s one big reason why platforms like Instagram and Facebook are getting better and better at reducing unwanted and malicious content: it’s AI.”

Clegg added that Meta is working with other companies in the industry to further improve these systems. “The level of industry cooperation is increasing, especially this year with an unprecedented number of elections.”

However, things are likely to change next month because of Meta’s own actions in the space. Clegg said the company plans to launch Llama 3, its most advanced GPT-style large language model, in the coming weeks, with a full release expected by summer.

Unlike many of its peers, Meta has previously released these AI models as open source, with few restrictions on their use. This makes it harder to prevent reuse by malicious parties, but allows outside observers to better scrutinize the system’s accuracy and bias.

Clegg said: “One of the reasons the entire cybersecurity industry is built on open-source technology is that applying the wisdom of the crowd to new technology tends to surface potential flaws faster than in proprietary systems, where you’re just relying on one corporate entity to play whack-a-mole.”

Yann LeCun, Meta’s lead AI scientist and one of the three people known as the “godfathers of AI,” believes the more pressing risk AI poses to democracy is the potential for control by a small number of closed models. “In the near future, all of our interactions with the digital world will be done through AI assistants,” he predicted. “If our entire digital diet is mediated by AI systems, then we need AI systems to be diverse, for the same reasons we need a free and diverse press. All AI systems are biased in some way and trained on specific data.

“Who is going to cater to all the languages, cultures, values and interests of the world? This cannot be done by a few companies on the West Coast of the United States,” LeCun said.