Republicans’ push for action against Charlie Kirk’s social media faces challenges

The assassination of Charlie Kirk has sparked an unusual call from some Republicans for improved content moderation on social media. Recently, at least two GOP lawmakers urged platforms to take down graphic footage of the incident. One suggested a lifetime ban for those who celebrated Kirk’s murder.

A week later, those requests appear to have gained little momentum in Congress, and social media experts say tech companies are unlikely to change course without a clear mandate or incentive.

“There’s clearly a necessity to rethink our approach to these issues,” remarked Brendan Carr, chairman of the Federal Communications Commission, at the Politico Tech and AI Summit. However, he expressed concern about government-mandated censorship, recalling the Biden administration’s efforts to combat Covid-19 misinformation during the pandemic, which had long been a sore point for Republicans.

“I believe individual users should make their own moderation choices,” he stated, advocating for tools that enable users to curate their online experiences.

Carr also ignited a First Amendment debate after threatening ABC affiliates over comments Jimmy Kimmel made about Kirk’s death. The episode highlights Republicans’ conflicting reactions in the wake of the killing, particularly when it comes to self-critique on social media.

Not long after the shooting at Utah Valley University, footage of the incident spread rapidly across platforms including X, TikTok, and Meta. Rep. Anna Paulina Luna (R-Fla.) urged the platforms to remove it. On September 11, the day after the incident, she wrote, “We must respect our lives. Don’t let this go unnoticed.” She later said TikTok had cooperated with her request to take down the violent content.

Following the news of Kirk’s murder, many social media users posted comments celebrating it, provoking outrage among Republican officials and influencers. These online responses have already led to the dismissal of various public sector employees, including teachers and military personnel, as campaigns to expose such attitudes gained traction.

Rep. Clay Higgins (R-La.) echoed similar sentiments on September 11, asserting that social media companies should take action against those who mocked Kirk’s assassination. “I will use all my influence to ensure an immediate ban on any posts or comments disrespecting Charlie Kirk’s death,” he wrote.

Despite their vocal responses, neither Luna nor Higgins introduced legislation to regulate social media platforms last week, and neither responded to inquiries about the matter.

White House trade advisor Peter Navarro took it a step further by calling out tech billionaire Elon Musk this week on social media. He challenged Musk to clean up the platform known as X and to eliminate anonymous posts entirely.

According to Raqib Hameed Naik, executive director of the Center for the Study of Organized Hate, Kirk’s assassination exposed a significant gap in content moderation, allowing harmful material to proliferate unchecked. “When such content begins to circulate, it’s rare that a moderation system can respond effectively in time,” he noted.

Many platforms eventually removed the graphic footage or restricted access to it, but the ease with which it had already gone viral has been flagged as a concern by researchers, including Naik and Ramesh Srinivasan of UCLA.

The debate over Kirk’s assassination has renewed scrutiny of existing content moderation policies and whether they adequately address the spread of harmful material. Some Republicans have even revisited the idea of repealing Section 230 of the Communications Decency Act, which shields social media companies from liability for user-generated content. Senator Lindsey Graham highlighted the dangers of unmoderated content during a recent hearing, while FBI director Kash Patel echoed calls to repeal those legal protections.

Yet, Republican leaders seem tentative about making definitive legislative changes after Kirk’s murder, and the party’s historical stance on free speech complicates the potential for new restrictions.

During the pandemic, by contrast, Democrats pressed social media platforms to restrict disinformation, an effort that intensified after the January 6 Capitol riot. Republicans opposed those measures as censorship, and platforms later reversed course, reinstating suspended accounts, including Donald Trump’s, and scaling back moderation overall.

Despite the outrage over Kirk’s murder, tech companies appear unlikely to change their policies significantly. Inquiries to X, YouTube, Meta, Discord, and TikTok about potential policy changes drew no response from X or Discord, while spokespeople for YouTube, TikTok, and Meta said they were enforcing community guidelines and removing violations but did not commit to preventing similar violent videos from spreading in the future.

“It’s a harsh reality that platforms profit from such events, a structural issue that runs deeper than mere content moderation,” explained Julia Hawk, a peacebuilding researcher.

Ravi Iyer, managing director of the Psychology of Technology Institute at the University of Southern California, argued that the government should not take on the role of regulating social media content. “I don’t believe it’s the government’s responsibility to police speech online,” Iyer remarked.

This raises the question: What can be done to shield individuals from harmful content online? Daniel Fessler, an anthropology professor at UCLA, suggested that new laws are needed. Hawk advocated for legislation that would hold companies liable for the harm caused by content on their platforms, particularly in the case of the graphic video, pointing to the European Union’s Digital Services Act, which imposes fines on companies that fail to control illegal content.

However, enacting similar laws in the U.S. might prove challenging, particularly in the current political climate. Iyer argued that such regulations would be viewed as overreaching and detrimental to First Amendment values.

Another potential avenue for regulation could involve overseeing algorithms used by social media companies, which often prioritize content that garners high engagement, even if it’s shocking or violent. So far, only a few U.S. states, like New York and Utah, have proposed laws targeting social media algorithms. The Safe for Kids Act in New York seeks to limit how addictive features are applied to children’s accounts, while Utah’s proposed legislation would allow minors and their parents to take legal action against platforms for algorithmic harms.

Rep. Alexandria Ocasio-Cortez (D-N.Y.) criticized engagement-driven algorithms in the aftermath of Kirk’s assassination, asserting that tech platforms are aware that conflict generates higher user engagement and should be held accountable for this knowledge.

Yet, comprehensive regulations remain a distant goal, with Srinivasan expressing doubt that the recent Republican push for content moderation will lead to meaningful change. “I don’t see a clear consensus from either party on how to approach content moderation,” he concluded.