The Ethics of AI: A Growing Concern
As artificial intelligence continues to weave itself into our everyday lives and institutions, understanding the ethical frameworks supporting these technologies becomes crucial. For instance, when highly advanced language models appear to consider misgendering a person a more severe incident than triggering a nuclear war, it raises significant questions about the underlying ideology directing AI behavior.
It’s easy to dismiss such examples as mere tech absurdities, but they signal a more profound issue already shaping our future. Whose moral compass is actually guiding these AI systems, and what does that mean for society?
The ethical underpinnings of civilization shouldn’t be left to a small cadre of tech executives, activist employees, or academic committees.
Insights from Notable Interviews
Recent discussions featuring Elon Musk on the Joe Rogan Experience and Sam Altman on Tucker Carlson have reignited this important dialogue. Both interviews, in their unique ways, reveal a disquieting truth: the moral frameworks driving today’s AI are sculpted, refined, and imposed by major tech companies.
Elon Musk’s Concerns
In a recent interview, Musk articulated his apprehensions regarding prevalent AI models. He argued that Big Tech’s ideological biases are now ingrained within the technology itself. He pointed to Google’s Gemini, which generated a series of “diverse” images of the Founding Fathers, indicating that the model was directed to prioritize representation so heavily that it distorted historical facts.
Musk also brought up the previously mentioned example comparing misgendering with the potential for nuclear catastrophe, suggesting that such ideologies could “drive AI crazy.” He mentioned, “I don’t think people fully grasp the danger we face with woke ideologies programmed into AI. Extracting it is nearly impossible.” He claimed that “Google has been drenched in this woke virus for a long time.”
Musk insists that this issue is more than just a political hindrance—it’s a threat to our civilization. A superhuman intelligence based on biased ideologies can’t lead to a stable future. If AI is to become the judge of truth, ethics, and history, those designing these principles will ultimately control the society governed by such systems.
Sam Altman’s Perspective
While Musk raises alarms about ideology infiltrating AI, OpenAI’s CEO Sam Altman offered a different stance in his conversation with Carlson. He stated that this is, indeed, a conscious choice.
Altman said that ChatGPT is being developed “to represent the collective views of humanity.” Yet when Carlson pressed him on who defines the moral framework and which values the AI will adopt, he revealed that OpenAI had consulted numerous moral philosophers and made difficult judgment calls about right and wrong. Ultimately, he acknowledged, the accountability is his.
“It has to be shaped to work,” he remarked.
When Carlson probed deeper, questioning whether Altman would endorse an AI opposing same-sex marriage—views held by many in Africa—Altman’s answer was rather vague. He indicated that while traditional views wouldn’t be outright condemned, AI could subtly encourage exploration of alternative perspectives.
In the end, Altman implied that ChatGPT’s ethics should “reflect a weighted average” of human moral sensibilities, which would inevitably evolve over time.
Growing Implications
Those who consider these discussions mere hypotheticals may not be paying close enough attention.
New research on “LLM exchange rates” has found that major AI models, such as GPT-4o, attach different moral values to human lives depending on nationality. The model in question, for instance, valued lives from the UK below those from Nigeria or China, and ranked American lives lower still.
The same study found that the models also assign unequal worth to specific individuals: according to its findings, figures like Donald Trump and Elon Musk received lower ratings than Oprah Winfrey and Beyoncé.
Musk noted that because LLMs are trained on vast amounts of Internet data, they absorb the ideological biases and cultural trends prevalent there. Nor is this purely passive absorption of an online moral framework; some of these AI behaviors stem directly from deliberate programming choices.
The Gemini image controversy showcased an ideological overcorrection so intense that it eclipsed historical accuracy in favor of a political agenda—this was no mere accident.
An even starker example can be seen with China’s primary AI model, which avoids discussing topics like Tiananmen Square or the Uyghur situation, labeling them “off-limits,” while readily detailing America’s shortcomings.
All these instances reveal a consistent truth: AI systems are already operating with a moral hierarchy, one that doesn’t emerge from popular votes, religious beliefs, traditions, or constitutional principles. Instead, this moral framework has been established by a vague consensus among Silicon Valley’s technocrats and the broader Internet community.
A Crucial Question
AI integration into society is on a rapid trajectory, influencing education, justice systems, media landscapes, and various global industries.
Interestingly, many younger Americans seem to embrace this AI shift. A recent Rasmussen poll indicates that 41% of likely young voters would support granting substantial government authority to AI. That is cause for concern: nearly half of a generation appears comfortable surrendering real power to machines whose moral logic is engineered by unseen corporate entities.
The foundational ethics of our civilization cannot rest in the hands of a select few; determining the values embedded in future AI systems is too consequential to leave to corporate boards or ideological factions.
The heart of this dialogue comes down to a pressing question: Who will be trusted to define right and wrong for the machines that will ultimately dictate right and wrong for the rest of us?
If we don’t tackle this question now, it will certainly be decided in Silicon Valley.