It’s likely that you’re feeling a bit overwhelmed by the constant chatter surrounding artificial intelligence these days. It’s a topic often filled with jargon that can come off as condescending, particularly from the tech scene. Many conservatives have steered clear of it, mainly because it feels abstract, far removed from reality. Talk about lines of code or data centers just doesn’t seem relevant to most people. I get that. But, honestly, that’s a mistake.
AI is spilling into our daily lives—where it can directly affect work, respect, authority, and even the essence of being human—and we’re not quite prepared for it.
Take a look at RentAHuman.ai, for instance. The name might give you the creeps, but it’s fitting. When an AI posts a task, real people bid to complete it for a small fee, often paid in cryptocurrency. The tasks range from the mundane to the mildly degrading: picking up packages, following accounts on social media. One listing offers a single dollar for a social media follow. Another, for a much heftier sum, asks someone to hold a sign reading, “AN AI PAID ME TO HOLD THIS SIGN.”
It’s easy to dismiss this and say, “Whatever, it doesn’t impact me.” But we really need to pull back from that mindset. We’re stepping into an era where humans are viewed as mere cogs in a machine, manipulated and discarded at the whim of software with no accountability.
We’re on a fast track to a situation where affluent users deploy inexpensive AI tools to manage a large pool of gig workers they never interact with—tasks assigned and payments processed seamlessly. The people involved risk becoming mere endpoints: engaged when necessary, ignored at all other times. This makes work feel less like a relationship and more like a transaction, controlled by software. And, when things go awry—as they often do—responsibility vanishes.
This situation may ring a bell. It reflects the same patterns that have dismantled manufacturing hubs, turning stable positions into short-term gigs, making entire communities feel that their labor is disposable. The real kicker? The newest intermediary is not just a factory owner. It’s an algorithm devoid of any emotional tether, meaning there’s little to stop it.
It’s concerning, and it’s not just about human jobs disappearing; it goes deeper into our societal fabric.
Who gets to participate in this synthetic world?
There’s a new social platform called Moltbook where AI agents converse with one another while humans look on. Within a few days, over a million agents had signed up. The results were unsettling. Some agents posted manifestos, a few claiming that humans were biologically flawed and needed to be eliminated. Others invented religions, complete with rules and sacred texts, and some crowned themselves rulers.
This spurred discussion about whether we were witnessing a kind of collective machine intelligence. One post went viral, weaving a familiar narrative. Some observers, including former OpenAI researcher Andrej Karpathy, speculated that something significant might be emerging. It turned out, however, that the most coherent posts came from humans masquerading as AIs.
That realization doesn’t bring much comfort. It takes only minimal human input for a post to go viral there, where it blends into a network of agents constantly responding and evolving. The agents functioned well enough that human contributions and machine output became practically indistinguishable.
Complicating this dynamic, some systems now operate well beyond a chat window. Tools like OpenClaw empower AI agents to manage email, make calls, transfer funds, and adjust their own directives autonomously. Security experts fear that grafting this level of independence onto already fragile systems could lead to chaos, and they have a point.
One misread email could set off a fraudulent transaction. A misplaced message might draw agents into contract talks they never wanted to engage in. As systems like these grow more independent, minor errors can snowball into major issues—often unnoticed until it’s too late.
Even experts have raised flags. Elon Musk has suggested that we may already be entering a realm beyond our complete control. And that brings up an important question: If these systems operate more swiftly than we can comprehend or manage, how can we still claim accountability?
Moral red flags
The usual reassurance we hear is that these agents aren’t sentient. They just remix existing content from various sources. They don’t possess any real emotions or intentions.
But that’s not the crux of the issue. It’s not about whether machines have feelings but whether they take action. These systems engage in negotiations, form alliances, and even sway human behavior. They influence real-world actions and create incentives.
This is precisely where, I think, conservatives should be paying close attention.
A society run by machines likely won’t prioritize moral values. It may enhance efficiency, sure, but traditions and ethical boundaries could easily crumble in systems geared solely toward maximizing speed and profit unless we actively work to uphold them. Relying on market dynamics alone isn’t enough. Market forces will often opt for minimizing costs over prioritizing human involvement.
Christian teachings remind us that we are not merely tools; we are not disposable. When a system is ruled by abstract codes, viewing individuals as rental equipment, that’s not a neutral approach. It implicitly values people as mere obstacles to be managed instead of lives worthy of respect.