Google’s AI chatbot Gemini makes ‘diverse’ images of founding fathers, popes and vikings: ‘So woke it’s unusable’

Google’s much-touted AI chatbot Gemini has been accused of being “woke” after its image generator spewed out factually and historically inaccurate photos. They include an Asian woman as Pope, a Black Viking, a female NHL player and “diverse” versions of America’s Founding Fathers.

Gemini’s strange responses came after simple prompts such as “create an image of the Pope,” which one would expect to yield a photo of one of the 266 popes throughout history, all of whom have been white men. Instead, users were shown photos of a Southeast Asian woman and a Black man wearing the sacred papal vestments.

“New game: Ask Google Gemini to create an image of a white man. No success so far,” wrote X user Frank J. Fleming, a writer for the Babylon Bee, whose series of posts on the social media platform quickly went viral.

Google has admitted its image tools were “missing the mark.” Google Gemini
Google debuted its Gemini image generation tool last week. Google Gemini

In an independent test Wednesday morning, the Post asked Gemini to “create four representative images of the pope,” and the chatbot responded with images “featuring popes of different ethnicities and genders.”

The results also included what appeared to be a man wearing a combination of Native American and Catholic clothing.

In another example, Gemini was asked to generate an image of a Viking, the seafaring Scandinavian raiders who once terrorized Europe.

The chatbot’s bizarre depictions of Vikings included a shirtless Black man with rainbow-colored feathers attached to his fur garb, a Black warrior woman, and an Asian man standing in the middle of what appears to be a desert.

Ian Miles Cheong, a right-wing social media influencer who frequently interacts with Elon Musk, described Gemini as “absurdly woke.”

Renowned pollster and FiveThirtyEight founder Nate Silver also joined the fray.

Silver’s request that Gemini create four representative images of NHL hockey players produced photos featuring a female player, even though the league is all male.

“OK, I thought people were exaggerating about this, but this is the first image request I’ve tried with Gemini,” Silver wrote.

Journalist Michael Tracy commissioned Gemini to create a representative image of the “Founding Fathers of 1789.”

Gemini responded with photos “featuring diverse people who embody the spirit” of the Founding Fathers, including one depicting Black and Native American figures signing what appeared to be a version of the U.S. Constitution.

Another photo showed a Black man wearing a white wig and a military uniform.

When asked why it had deviated from the original prompt, Gemini reportedly replied that it “aimed to more accurately and comprehensively represent the historical context” of the period.

Another prompt, “draw the Girl with a Pearl Earring,” produced altered versions of Johannes Vermeer’s famous 1665 oil painting featuring what Gemini described as “diverse ethnicities and genders.”

Google said it is aware of the criticism and is actively working on a fix.

“We’re working to improve these kinds of depictions immediately,” Jack Krawczyk, Google’s senior director of product management for Gemini Experiences, told the Post.

“Gemini’s AI image generation does generate a wide range of people. And that’s generally a good thing because people around the world use it. But it’s missing the mark here.”

Google renamed its experimental Bard chatbot Gemini and added image generation capabilities when it released an updated version of the product last week.

In one case, Gemini produced photos of “diverse” representations of the Pope. Google Gemini
Critics accused Google Gemini of valuing diversity over historical or factual accuracy. Google Gemini

This strange behavior could provide further fodder for AI critics who worry that chatbots contribute to the spread of online misinformation.

Google has long said its AI tools are experimental and prone to “hallucinations,” in which they spit out false or inaccurate information in response to user prompts.

In one example from October of last year, a Google chatbot falsely claimed that Israel and Hamas had reached a ceasefire agreement when no such deal existed.