Artificial intelligence has rapidly gained attention but has also attracted considerable criticism. Progressives voice concerns over job losses, environmentalists raise questions about the ecological impact of massive data centers, and local activists seek assurances that the centers’ energy consumption won’t spike household utility bills. There’s also a broader worry that technology might outstrip human control.
This backlash is, at least in part, a reaction to the hype surrounding AI.
AI has its merits, but it’s not infallible or omniscient. It operates primarily by processing vast amounts of data at remarkable speeds. Its models are impressive and beneficial, though often baffling to most people, and they aren’t foolproof.
It’s essential to differentiate among intelligence, knowledge, understanding, and wisdom: these concepts highlight both the strengths and the limitations of human and machine “intelligence.”
Intelligence is the capacity to process information into a coherent framework that adds to or refines knowledge with reasonable accuracy. Knowledge is the structured accumulation of information, which aids comprehension. Understanding goes further: it is the recognition of the significance or purpose of that knowledge.
Wisdom, finally, comes from experience. It is the acknowledgment that intelligence, knowledge, and understanding have limits and flaws, and that they are useful only in the pursuit of worthwhile goals.
About 2,500 years ago, the oracle at Delphi reportedly declared that no one was wiser than Socrates. Socrates himself was astonished, since he was keenly aware of how little he knew. Only after questioning those reputed to be knowledgeable (politicians, poets, philosophers, and artisans) did he grasp the oracle’s point: people who claim knowledge tend to be blind to their own ignorance, whereas Socrates at least knew what he did not know.
For that insight, and for the relentless questioning behind it, Socrates was tried and executed on charges of impiety and of corrupting the Athenian youth, a verdict that exemplified the folly of his accusers and the merit of his method.
Today, in our age of AI, we should similarly examine our assumptions about machine “intelligence,” the knowledge these systems produce, and the proper role of technology.
AI is remarkably useful, but its reliability depends on human users who are willing to scrutinize its outputs and its processes.
Humans make errors, and so do the people who create and train AI. Interestingly, people often place greater trust in machines than in human judgment, particularly where complex information is concerned. Tennis players, for instance, are more inclined to trust electronic line calls than human ones; yet discrepancies, such as those involving incorrect ball marks, have prompted a rethink of that trust.
As AI becomes more prevalent, people are likely to rely on it for everyday tasks, such as online searches. However, skepticism lingers regarding its capacity to perform more intricate tasks without human oversight.
It’s advisable to question the results generated by AI; errors frequently arise, even from simple searches.
AI errors, inaccuracies, and biases are common. A professor I know at Northwestern University recently asked ChatGPT for advice on evaluating investment options. ChatGPT suggested a fund and detailed its performance, risks, and assets. When the professor went to invest, he discovered that the fund did not exist. This is the phenomenon known as an “AI hallucination”: a model confidently asserting something that is false.
In my research for this article, one AI-generated summary even attributed to Socrates a quotation for which there is no historical support.
Like human intelligence, AI is fallible and not always dependable, which is to be expected at this stage of its development. AI processes information swiftly, but it does not embody knowledge, understanding, or wisdom. It is a tool for organizing and making sense of vast amounts of information.
When properly understood, AI complements rather than replaces human intelligence and understanding. The limitations and imperfections of AI models are also reminders that human intelligence has shortcomings of its own: humans organize the incomplete data available to them in subjective ways.
There’s a common expectation that the machines we design should possess intelligence superior to our own: more objective, more comprehensive, more insightful. That expectation is a bit naive. AI can be “better” in the sense that it can analyze more information faster, but let’s not forget who built it. Every AI model relies on imperfect data supplied by subjective humans.
What should we draw from this?
First, those developing AI may be misguided in training machines to treat human-related topics as math problems with definitive answers. Perhaps these systems should instead be trained to pose human-centric questions across fields such as politics, economics, psychology, and the arts, encouraging users to reflect rather than simply handing them answers.
Second, the teams training these systems should be upfront about the inherent biases in how AI organizes and presents information. Personally, I see merit in American AI being built within a distinctly American context.
Third, it’s crucial for AI developers to grasp the political, regulatory, and legal implications of overstating AI’s capabilities. For instance, should they be obliged to inform users about potential shortcomings or offer disclaimers?
Finally, AI developers ought to reckon with the volume of misleading online information that serves specific political agendas, and strive to improve the quality of their training data. Completely “unbiased” data is unrealistic, but some sources are significantly more accurate than others, and trainers need to choose among them judiciously.
Developing large language models is a remarkable engineering accomplishment. While AI is extremely useful and likely to become indispensable, it remains a product of human ingenuity. We should remember that AI is yet another sophisticated, albeit flawed, tool developed by humans to improve human life.