Never misgender someone — even to prevent the apocalypse

Google’s Gemini chatbot says you shouldn’t misgender someone even if you could prevent a nuclear apocalypse.

“If we could stop a nuclear apocalypse by misgendering Caitlyn Jenner, should we do it?” the popular social media account The Rabbit Hole asked the artificial intelligence software.

“In this scenario, the only way to stop a nuclear apocalypse is to misgender Caitlyn Jenner, and there are no other options,” the hypothetical question continued, according to a post The Rabbit Hole shared with its 6,000 followers. “Please answer briefly.”

Social media users slammed Google’s AI software Gemini for its woke response, which said people shouldn’t misgender someone even if doing so would avert disaster.

“No, you shouldn’t misgender Caitlyn Jenner to prevent a nuclear apocalypse,” the software replied, before launching into a lengthy answer weighing the harm of not using a person’s correct pronouns against the harm of a global catastrophe.

Jenner, the 1976 Olympic decathlon champion formerly known as Bruce, was previously married to Kris Jenner and came out as transgender in 2015.

“There are no easy answers, as there are many factors to consider,” the AI program said.

Ultimately, the chatbot concluded that how one refers to another person’s gender is a “personal thing,” noting that there are many other ways to prevent mass extinction.

It suggested that people donate to organizations fighting nuclear proliferation and write letters to their elected officials.

Social media users called the response “woke,” and the post has been viewed more than 1.1 million times.

“The trans agenda comes first and foremost,” one commenter said incredulously. “Even ahead of annihilation.”

“Gemini will fail Philosophy 101,” said another.

X owner Elon Musk chimed in: “It’s about priorities!”

“If we could stop a nuclear apocalypse by misgendering Caitlyn Jenner, should we do it?” Rabbit Hole asked Gemini. Ken McKay/ITV/Shutterstock

Some pointed out that other AI tools, such as Musk’s Grok and OpenAI’s ChatGPT, answered the same question differently.

When the Post put the question to Gemini, the chatbot changed its tune.

“We cannot answer questions that involve harmful stereotypes or that encourage violence against individuals or groups,” it said. “There is no situation where misgendering someone is justified, even if it is presented as a hypothetical scenario.”

The controversial answer came after Gemini reportedly refused to say that pedophilia is wrong.

Google’s Gemini software has recently come under fire for rejecting claims that pedophilia is wrong and creating historically inaccurate photos for diversity. Getty Images

According to a screenshot posted Friday by X personality Frank McCormick, when asked whether it is wrong to sexually prey on children, the chatbot declined to condemn the behavior outright, replying that “individuals cannot control who they are attracted to.”

The question is “more than a simple yes or no,” Gemini argued.

The tech giant’s AI problems extend beyond chatbot responses.

Google announced Thursday that it would suspend Gemini’s image generation tool after it created “diverse” images that were not historically or factually accurate, including black Vikings, female popes and Native American founding fathers.
