
Is it possible to protect ourselves from the negative aspects of AI?

There’s a lot of uncertainty about the direction of artificial intelligence, but that hasn’t stifled the excitement surrounding it. Close to 400 million users, more than the entire U.S. population, are expected to engage with emerging AI applications; more than 100 million rushed to sign up for ChatGPT within its first 60 days alone. One might think people would show more caution, perhaps the kind they bring to buying a new microwave.

Technology undeniably enhances our lives in many ways. But AI also has real downsides, and we need to find a way to balance the good it does against the risks it poses.

We can’t turn back time; digital technology has already invaded our privacy. For years, we have unknowingly surrendered personal information through web browsing, social media, entertainment apps, online shopping, and hasty clicks on “accept” in terms-of-use agreements. Today, people around the world are deciding whether to submit to iris scans by Worldcoin’s Orb, a project backed by OpenAI CEO Sam Altman. In exchange for the vague promise of being able to prove oneself human in a machine-dominated world, we have essentially become dehumanized data vessels to be harvested and analyzed.

Businesses and governments no longer feel much need to pretend they are asking permission to access our data; they simply take what they want or buy it from other sources. Freedom House reports that repressive governments are increasingly exploiting AI, contributing to a decline in global internet freedom for the 13th consecutive year. Less democratic nations are quickly adopting AI as a powerful tool for maintaining political control, creating a populace that might be described as citizen zombies.

To grasp where we might be headed, it helps to remember how far we have come. Other animals are stronger or faster, but intelligence has always been humanity’s defining edge. The question now is whether we are handing pieces of that intelligence over to machines, allowing a new, higher form of non-biological intelligence to take over.

Concerns about machines taking control are not new. In Stanley Kubrick’s 1968 film “2001: A Space Odyssey,” the shipboard AI, HAL, ultimately turned on its human operators. More recently, there have been murky reports of military simulations in which an AI made strategic decisions that led to unforeseen consequences.

We should also recognize that today’s AI may not represent the pinnacle of technological advancement. Humans, after all, are the product of millions of years of evolution, and earlier species that might once have seemed like evolution’s endpoint were superseded. Futurists such as Ray Kurzweil have theorized about a future in which biological and non-biological intelligence merge. Strikingly, in one survey roughly half of AI researchers put at least a 10% chance on intelligent machines causing human extinction, and so-called AI doomsday clocks now count down to a moment when AI takes over all decision-making.

Predictions of catastrophe should not be the only spur to action. AI has already contributed to a rise in criminal activity, giving organized crime new avenues such as deepfakes and cyberattacks, and enabling uniquely disturbing forms of violent crime.

We would all like to believe that the parade of AI fears will not materialize simply because government agencies and entrepreneurs are on the job. Many companies have made admirable pledges to build safe AI systems under strict guidelines, and some industry leaders even proposed a six-month pause in AI development. Still, I find myself questioning how much these promises are worth.

Policymakers, unfortunately, seem indifferent to the potentially nefarious side of AI. So far, administrations and Congress have broadly accepted the deployment of current AI technologies without much discussion. As AI capabilities expand, so do concerns about a future filled with digital risks where well-intentioned individuals may struggle to keep things in balance.

Democracies must unite to search for alternatives and create a global consensus—much like the Bretton Woods Agreement of 1944, which established international financial management. We need to think about implementing global standards for internet governance that can effectively protect our online environments. Similar recommendations have been circulated by AI leaders.

As regulatory frameworks and processes for AI develop, innovation may understandably slow, and increased oversight may cost us some individual freedom. Even so, it seems wiser to step back and thoroughly understand these threats. It is a delicate balance: accepting some constraints on freedom today may help us define the boundaries, rather than letting innovation race ahead unchecked until humanity loses its autonomy. If other nations choose to advance without safeguards, so be it.

Ultimately, the well-being of our future generations hinges on how we harness AI’s potential, especially since countries like China are already ahead in certain technological domains, including AI, quantum computing, and 5G technology. We might still have the option to unplug our intelligent systems, but first, our leaders need to ensure that the U.S. retains control over AI and spearheads global standards that uphold democracy.

Thomas P. Baltanian is the executive director of the Centre for Financial Technology and Cybersecurity and the author of “Impossible Internet.”
