
AI isn’t as smart as we thought.

In the well-known British comedy “Bedazzled,” Dudley Moore portrays a character who trades his soul to the devil for seven wishes. He’s infatuated with a waitress at his lunch counter job, but, unfortunately, she doesn’t seem to notice him.

Throughout the movie, Moore tries, with the devil’s assistance, to win the waitress over. He articulates each wish in great detail, yet every time, despite getting exactly what he asked for, something crucial goes awry. He becomes incredibly wealthy but discovers the waitress is more interested in another man. In another scenario, he wishes to be poor and in love, only to find she’s married to someone else.

This kind of deception isn’t surprising coming from the devil, but what happens when your trusted colleague or a supposedly reliable encyclopedia offers similar misguided guidance? You might receive an answer that sounds sensible or educated, but it could be entirely wrong or, at times, even dangerously misleading.

Many people today are facing this very issue with artificial intelligence applications. From exploring dietary advice to crafting legal documents, many depend on AI as a research companion. It often provides responses that feel engaging and thorough.

However, the consequences of these interactions can sometimes be quite alarming.

For instance, in one case an AI-generated legal brief included fabricated quotes. Medical inquiries have been answered with misleading conspiracy theories. Some individuals have even received harmful suggestions during vulnerable moments.

This brings up a perplexing question: how do we assign responsibility to an AI if it misleads us? Does it act with intent? I find myself pondering this a lot.

The potential of these AI applications is immense. Yet, if they lead us astray or provide misleading information, they risk failing to deliver on that potential. Can AI discern right from wrong? Is it capable of teaching morality? These are thoughts I’ve been mulling over as I lead the New York Ethics and Culture Association.

AI systems don’t actually think. Their functions rely on algorithms that connect words and concepts. They draw from a vast array of written content to generate responses based on the most common patterns. While they can repeat familiar phrases, they don’t evaluate which answers are the most suitable.

Through experience, people learn which responses are unwise or harmful. In theory, computers can be taught about good and bad, but it’s impossible to account for every scenario. Teaching chemistry, for instance, is constructive; instruction in building explosives is not. Studying medical illustrations is fine, but using them for harmful purposes is not acceptable.

Can AI teach honesty, though?

Humanity has wrestled with ethical dilemmas for thousands of years, and our understanding of them has always been questionable. We recognize that ethical standards are shaped by culture and ongoing human experiences. These norms evolve through our interactions and shared expectations, similar to how language transforms over time. This idea, championed by philosophers like John Dewey, suggests that a static list of moral rules cannot adequately represent our intricate cultural ethics.

Applying ethics involves experience, judgment, and empathy. It requires accountability and understanding the impact of our choices, whether positive or negative. Determining the right course of action or the truth to believe in often challenges even the most astute minds. Yet, that’s part of the human endeavor.

Expecting a machine—one that lacks our physical experiences or emotional insights—to advise on human matters isn’t realistic. It may follow the letter of what we request, but without malice, it can still lead us astray. Ultimately, humans need to remain the final arbiters of our decisions. Handing over that authority to AI is akin to selling our souls.
