Former Google CEO Eric Schmidt has predicted that the most powerful artificial intelligence systems will be housed in military bases surrounded by machine guns in the United States and China.
“We suspect that eventually both the United States and China will have a small number of extremely powerful computers with autonomous inventive capabilities beyond what they would want to give out to their own citizens or competitors without permission,” Schmidt said in an interview with Noema Magazine published on Tuesday.
The former Google chief, who led the search engine from 2001 to 2011, said AI systems will acquire knowledge at a very rapid pace in the coming years and will eventually “start working together.”
Schmidt, whose net worth the Bloomberg Billionaires Index puts at $33.4 billion, is an investor in the Amazon-backed AI startup Anthropic.
He said the proliferation of knowledge about AI in the coming years will pose challenges for regulators.
“Here we come to the question posed by science fiction,” Schmidt said.
He defined an AI “agent” as a “large language model” that can “learn something new.”
According to Schmidt, “It’s reasonable to expect that these agents will become so powerful that there will be millions of them.”
“That means you have a lot of agents running around and being available for you.”
He then pondered the consequences of agents “develop[ing] their own language to communicate with each other.”
“At that point, you no longer understand what the model is doing,” Schmidt said, adding: “Should we pull the plug?”
“It would become a real problem if agents started communicating and behaving in ways that we humans couldn’t understand,” the former executive, 69, said. “That’s where it ends, in my opinion.”
“A reasonable expectation is that we will enter this new world within five years, not within 10,” Schmidt said.
He added that tech companies are working with Western governments to regulate new technologies.
Schmidt said the risks from Western AI companies are minimal because they are “well-managed” and therefore face “less risk of litigation.”
“They’re not going to wake up in the morning and think of ways to hurt someone or harm humanity,” he said.
But Schmidt warned that there are “evil people out there who will use your tools to hurt people.”
“All technologies are dual-use,” he said. “All of these inventions can be misused, and it’s important that the inventors are honest about that.”
Schmidt said the problem of spreading misinformation through AI and deepfakes is “unsolvable.”
“There are ways to try to regulate, but the cat is out of the bag and the genie is out of the bottle,” he said.
Last year, a group of technology leaders from OpenAI, Google DeepMind, Anthropic and other research labs warned that future AI systems could pose a more deadly threat to humanity than pandemics or nuclear weapons.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics or nuclear war,” the nonprofit Center for AI Safety said in a statement.