Robby Starbuck resolves legal dispute with Meta over defamation by AI chatbot

Robby Starbuck, a conservative figure, has reached a settlement with Meta after suing the company, asserting that an AI chatbot had made derogatory claims about him.

Under the terms of the settlement, Starbuck will act as a consultant for Meta, collaborating with the Product Policy Team to address political bias in the AI model. He will also help enhance ongoing efforts to minimize the spread of misleading information generated by chatbots.

In a joint statement, both parties expressed satisfaction with the resolution. They noted that since addressing these critical issues, Meta has significantly improved the accuracy of its AI and reduced ideological and political bias.

“I’m pleased with the outcome and believe this sets a precedent for ethical AI across the industry,” Starbuck commented.

Starbuck initially filed his defamation lawsuit in April in Delaware Superior Court after the chatbot allegedly mischaracterized him as a “white nationalist” who had been arrested during the January 6th events.

According to reports, the chatbot also suggested that right-wing influencers like Starbuck should lose custody of their children, labeling them as dangerous.

The activist became aware of the chatbot’s harmful statements about him in August 2024, when users on X shared examples of the misinformation it had generated.

These instances falsely connected Starbuck to the January 6th riots and claimed he was involved in QAnon conspiracy theories, as well as being anti-vaccine.

Prompted by these posts, Starbuck investigated further and found additional inaccuracies from the chatbot. After his attempts to resolve the matter directly with Meta failed, he pursued legal action.

Concerns about bias in AI have been highlighted across various platforms from different companies.

For example, Google Gemini faced criticism for its portrayal of Memorial Day and for generating racially insensitive images of historical figures. Furthermore, ChatGPT, developed by OpenAI, was known to refuse requests to praise Donald Trump while accepting similar requests for Kamala Harris or Joe Biden. OpenAI has since implemented measures to address bias in its model.

Meta has acknowledged that rectifying political bias in its AI has long been a priority. The company noted that major language models tend to reflect a left-leaning perspective because of the training data available online, and said it is committed to achieving a more balanced representation of diverse viewpoints.

Starbuck said he intends to leverage his new role at Meta to advocate for fairness across the AI landscape, and he hopes his initiatives will influence the broader industry.

“Collaborating with a technology leader like Meta represents a crucial step toward creating equitable products. I believe our improvements in AI training could set a new standard across the industry,” he said.

Since Donald Trump’s presidency, Meta has made efforts to address perceived anti-conservative bias within its operations. In January, the social media company announced changes to its diversity, equity, and inclusion policies.

Additionally, Joel Kaplan, a former Republican political consultant and ex-deputy chief of staff under President George W. Bush, was brought on as chief global affairs officer. Kaplan indicated that discontinuing the DEI initiatives would help build a more talented team.
