Rethinking Human-Generated Content
Human-generated content may be on its way out, according to Henry Ager, a former AI consultant for Meta. He predicts a future inundated with what he calls "AI Slop": low-quality, noisy, and derivative material, including text, videos, and even influencers, all powered by algorithmic generation.
The shift aims to boost engagement, but at the cost of the creativity and quality of the information we consume.
Meanwhile, Meta finds itself caught in a familiar struggle: trying to remain relevant amid the rapidly evolving AI landscape. It’s a bit of a vicious cycle. The more AI outputs you generate, the more tools you need to manage all that output.
If Ager’s insights hold true, navigating the internet could soon become even more frustrating. And, well, how long this will last is anyone’s guess—even Ager’s.
“The slop era is inevitable. I don’t know what to do.”
This line of reasoning isn't especially profound, but it isn't off the mark either. It recalls the logic of an arms race: the point is always to keep up, because if the other side gets ahead, you're done for.
This urgency to compete in such a strained market seems deeply rooted in Meta's checkered history and in its leader, Mark Zuckerberg. There's an almost childlike desperation here: a wish to fit in and be cool, coupled with a fear of being left behind.
A Decade of Desperation
Think back: Meta's journey began with Facebook, followed quickly by the acquisition of Instagram. There's no denying the profound influence these platforms have had on how we pay attention and connect with one another, for better or worse.
Given the troves of data often gathered under dubious circumstances, Zuckerberg's shift toward AI seems almost inevitable. By 2012, when Instagram was acquired, machine learning was already in the spotlight; that same year, Google was investing heavily in AI through Google Brain.
From 2014 to 2018, the artificial intelligence landscape began to mature. OpenAI emerged, while Europe introduced the General Data Protection Regulation to safeguard privacy rights. During this period, Zuckerberg and Facebook racked up numerous data-privacy violations while expanding through the acquisitions of WhatsApp and Oculus.
The response? AI content moderation. It's evident that even those at the top lack a grand, visionary plan; it often looks like a series of hasty decisions aimed at shoring up market position.
In 2021, the "Zuckerberg-a-verse" was rebranded as Meta. By then, Facebook had begun to resemble a chaotic jumble of ads, scams, and bots. The exodus of younger users from the platform is frequently cited in conversations about the decline of social media. The pivot to "Meta" signaled a shift toward AI, though an unfocused one.
The Llama Evolution
Perhaps Zuckerberg simply had to set a direction for his peers. In early 2023, Meta rolled out an openly available AI model called Llama, and serious changes followed later that year as project performance underwhelmed.
More recently, to elevate its position in the AI sector, Meta has made notable hires from OpenAI and DeepMind, apparently racing to build a comprehensive AI effort aimed at superintelligence.
Meta's efforts have touched its various platforms—Facebook, Instagram, and WhatsApp, along with a blend of metaverse and virtual reality products—with varying levels of success and significance. Its push into advanced AI models suggests an attempt to sidestep the unpredictable nature of human creativity.
Should Zuckerberg heed Ager’s warning about the looming “slop” phase, he might need to react rapidly, fortifying AI systems to address the fallout precipitated by existing models.
Warning Signs Ahead
This is a broader issue; it’s not just Facebook facing turmoil—it’s the internet overall. We’re staring down a potential deluge of AI-generated content.
- "Category 5" slop storms are predicted to hit in the latter half of 2025.
- 50% of social media content might be automated or AI-generated.
- This shift will likely lead to a prevalence of low-quality text, images, and videos.
- Trust in content could deteriorate as no clear standards emerge for distinguishing between human and AI-generated material.
- Meta’s own tools may dominate the landscape, worsening the slop level.
- The market for generative models designed to skirt AI security checks is set to expand.
- Users and businesses may scramble for cryptocurrency solutions to verify authenticity.
Even if Bitcoin could someday provide a scalable way to distinguish human from automated content, Ager emphasizes that "the era of slops is inevitable." His resignation is palpable as he admits, "I don't know what to do about it."
While we may be witnessing yet another twist in the ongoing story of the internet, there’s still a glimmer of hope. Perhaps, if larger platforms crumble under their own convoluted schemes, smaller, curated spaces will flourish, allowing for more genuine human interactions.
There is little in the way of a unified front against low-quality content or big tech. I, for one, would back fresh initiatives from founders interested in building platforms focused on genuine interaction rather than raw numbers and grand ambitions.
