Artificial superintelligence remains a concept under discussion, but progress toward it is constant. What does the future hold once we build an intelligence that surpasses human capabilities in every area?
Max Tegmark, a physicist at MIT, raises concerns that our problems will only multiply once that reality is achieved.
Despite the recognized dangers and widespread reluctance, figures like OpenAI's Sam Altman, a driving force behind the AI surge, are eager to see it realized no matter the risks.
As Glenn Beck has pointed out, “Sam Altman believes he is creating God. … There are plenty of folks in Silicon Valley eager to meet this God they think they’ve made.”
Tegmark is troubled by Altman's bleak technological aspirations, which extend beyond creating superintelligence. In his 2017 essay "The Merge," Altman suggests that merging humans with machines may be our only way to keep up with superhuman AI. He goes as far as saying we may soon be capable of "designing our own offspring."
However, while many people wish to steer clear of this transhumanist direction, Altman and others in tech seem set on pushing us toward that future regardless.
“So how do we stop it?” Glenn inquires.
In a recent episode of “The Glenn Beck Program,” Tegmark shared four strategies for resisting the AI onslaught.
1. Question the Notion That AI Is "Inevitable"
“These corporate lobbyists keep trying to convince us that it can’t be stopped,” Tegmark noted. “This is just the latest psychological trick.”
He points out that just because something is technologically possible doesn't mean it should be done. Human cloning, for example, is feasible yet remains taboo because of ethical concerns, which further illustrates his point.
The global consensus has been that tampering with our own biology could be dangerous, and such fears have thus far curtailed progress. The same may apply to ASI and cyborgs; although they could theoretically exist, the risks might outweigh the benefits if public opinion shifts against them.
2. Prefer Control Over Chaos
Some argue that the U.S. must remain competitive in the AI landscape because of pressures from places like China. But Tegmark sees this as a “suicidal race,” where once we reach superintelligence, we may find ourselves subservient to machines.
But China, he notes, values something even more than technological superiority: stability. Under Trump, the U.S. has regained its footing as a superpower, which adds complexity to the competition. "Competing for supremacy shouldn't mean losing control," Tegmark cautions.
3. Push for Government Oversight
While Glenn worries about tech moguls like Sam Altman pushing AI forward unchecked, Tegmark argues that we need regulatory frameworks. He recalls a time when biotechnology lacked oversight, leading to disasters such as the thalidomide tragedy.
The drug, initially prescribed for morning sickness, caused severe birth defects and ultimately prompted the government to regulate the biotech sector.
"We've implemented the same for other industries," Tegmark reasoned. "To insist that AI companies alone operate without safety standards in the U.S. is essentially giving them a corporate handout."
4. Empower the Public’s Voice
Many people refrain from voicing their concerns about the AI race out of a sense of powerlessness or a fear of being labeled out of touch. But Tegmark believes those fears are unfounded.
“Fewer than 5% of Americans actually support a race toward superintelligence,” he pointed out.
Our opinions can make a difference. Through the Future of Life Institute, Tegmark has launched a petition to hold AI developers accountable for the threats posed by advanced AI. Notable figures from both sides of the political aisle have already signed, including Glenn.
“We need you to sign this,” he urges.
“If we lose control of technology, we risk the end of humanity,” he emphasizes.