AI Leaders Tell Globalist Davos Crowd that ‘Artificial General Intelligence’ Will Be ‘Better than Humans’

Top executives from major AI organizations such as OpenAI, Google DeepMind, and Cohere met at the World Economic Forum in Davos, Switzerland, to discuss the impending approach of artificial general intelligence (AGI) and its potential impact. One CEO explained that AGI is “better than humans at almost everything humans can do.”

According to a CNBC report, the globalist Davos summit brought together AI leaders from prominent organizations such as OpenAI, Google DeepMind, and Cohere for a dialogue on the emergence of AGI, a form of AI that equals or exceeds human intelligence, which has sparked both enthusiasm and concern within the AI community.

A human relaxes among robot workers (Andrew Bret Wallis/Getty)

OpenAI CEO Sam Altman suggested during a panel discussion at the World Economic Forum that AGI could become a reality “in the fairly near future.” But Altman downplayed concerns that AGI would dramatically transform the world, saying, “AGI will change the world far less than we think.” Altman has previously expressed concern about AI being used for disinformation and cyberattacks, saying, “I think people should be happy that we’re a little bit afraid of this.”

Aidan Gomez, CEO and co-founder of Cohere, agreed with Altman that AGI is on the horizon, but emphasized that its definition is vague. “First of all, AGI is a very loosely defined term. If you just mean ‘better than humans at almost everything humans can do,’ which I would agree with, then systems that can do this will likely be available soon,” Gomez said, adding that enterprise adoption may take decades. Cohere is focused on the adaptability of these systems and on increasing efficiency.

Lila Ibrahim, chief operating officer of Google's DeepMind, highlighted the uncertainty surrounding the definition and timeline of AGI. “The reality is no one knows,” she said. “There's a lot of discussion going on, not just within the industry, but also within organizations, among AI experts who have been doing this for a long time.”

She continued: “We are already seeing that AI has the ability to unlock our understanding in areas where humans have not been able to make such advances, and to collaborate with humans. It’s AI as a tool.”

“I think that’s a really big open question, and I don’t know how to answer it other than: how do we actually think about it, rather than how long it will take?” Ibrahim added. “How do we think about what that will look like, and how do we make sure we are responsible stewards of the technology?”

Salesforce CEO Marc Benioff emphasized during a panel discussion the need to prevent a “Hiroshima moment” in the AI field. “We don’t want that in our AI industry. We want to have healthy partnerships with these moderators and regulators,” Benioff said, stressing the need for effective regulation to avoid the pitfalls observed with social media over the past decade.

Jack Hidary, CEO of SandboxAQ, took a different view, pointing out that although AI has passed the Turing test, it still lacks common sense. “One thing we learned from LLMs [large language models] is that they are very powerful and can write like there’s no tomorrow for college students, but sometimes it’s hard to find common sense,” Hidary said. He predicted that AI, especially humanoid robots using advanced AI communications software, would make a big leap forward in 2024.

Read more at CNBC here.

Lucas Nolan is a reporter for Breitbart News covering free speech and online censorship issues.
