Meta’s AI assistant and Google’s search autocomplete feature have come under intense scrutiny for providing inaccurate information about the recent assassination attempt on former President Donald Trump. While Google claims it is working to improve the feature, Meta has offered a different excuse for its AI’s refusal to acknowledge the attempt: the assistant was “hallucinating.”
As The Verge reports, the revelation that Meta’s AI assistant suppressed information about the assassination attempt on former President Donald Trump has raised concerns about the reliability of AI-generated responses to real-time events and their potential impact on public information dissemination.
Meta’s head of global policy, Joel Kaplan, addressed the issue in a company blog post published on Tuesday. Kaplan described the AI’s response as “disappointing” and explained that Meta had initially programmed the AI to not answer questions about the assassination attempt. However, this restriction was later lifted as users began to notice the AI’s silence on the subject.
Despite the adjustments, Kaplan acknowledged that “in a small number of cases, Meta AI continued to provide incorrect answers, sometimes claiming that events never occurred.” He assured the public that the company was “promptly addressing” these inaccuracies.
Meta AI doesn’t provide any details about the attempted ass*s*nation.
We are witnessing the suppression and cover-up of one of the most serious incidents in real time.
It’s completely unreal. pic.twitter.com/BoBLZILp5M
— Libs of TikTok (@libsoftiktok) July 28, 2024
Kaplan attributed these errors to a phenomenon known in the AI industry as “hallucinations,” a term that describes instances when an AI system generates false or inaccurate information, often with a high degree of confidence. Kaplan noted that hallucinations are “an industry-wide problem for all generative AI systems and an ongoing challenge to how AI will process real-time events going forward.”
Meta executives also emphasized the company’s commitment to improving its AI systems, saying, “As with all generative AI systems, models may return inaccurate or inappropriate outputs. As these systems evolve and more people share their feedback, we will continue to address these issues and improve these features.”
The controversy over AI responses to the assassination attempt on former President Trump is not unique to Meta. Google was also caught up in the situation, having to deny claims that its search autocomplete feature was censoring results related to the incident. Those claims drew a strong response from Trump himself, who used his Truth Social platform to accuse both companies of attempting to rig the election.
Read more at The Verge here.
Lucas Nolan is a reporter for Breitbart News covering free speech and online censorship.
