Robbie Starbuck Sues Google for Defamation
Conservative activist Robbie Starbuck has launched a defamation lawsuit against Google, claiming that the company’s AI tools inaccurately connected him to sexual assault allegations and labeled him a white supremacist.
Starbuck, known for his campaigns against diversity, equity, and inclusion (DEI) initiatives at American companies, filed the suit on Wednesday in Delaware Superior Court, seeking more than $15 million in damages. He asserts that false information generated by Google’s AI has damaged his reputation.
According to the lawsuit, Starbuck first became aware of inaccuracies in Google’s AI-generated content in 2023 while using Bard, one of Google’s early AI chatbots. The complaint states that Bard falsely claimed Starbuck was associated with Richard Spencer, a well-known white supremacist. Starbuck voiced his concerns on the social media platform X, warning his large following about the risks of AI systems influencing consequential decisions such as loan approvals.
The lawsuit further alleges that Google’s newer AI tools produced additional false claims about him, including fabricated accusations of sexual assault. Google spokesperson Jose Castañeda acknowledged that generating false information is a “well-known problem” for large language models and said the company is committed to transparency in addressing such inaccuracies.
This is not Starbuck’s first legal confrontation over AI-related defamation. He sued Meta in April, claiming that the company’s AI tools falsely stated he had participated in the January 6, 2021, Capitol riot.
In August, reports indicated that Starbuck had settled the Meta lawsuit. As part of the settlement, Starbuck reportedly took on a consulting role with the company, focused on reducing political bias and the generation of fabricated information in its AI tools.
Starbuck has gained prominence for pressuring major companies to abandon DEI policies and environmental initiatives. His lawsuit against Google, like his earlier suit against Meta, raises important legal questions about accountability for AI-generated information.
To date, no U.S. court has awarded damages for defamation by an AI chatbot. A recent case in Georgia, a defamation suit over ChatGPT filed by conservative radio host Mark Walters, was decided in OpenAI’s favor. Clare Norins, director of the First Amendment Clinic at the University of Georgia School of Law, suggests that as more such cases arise, new legal precedents on AI accountability may be established.
