
An obscure figure from the Civil War era offered us a counterintuitive warning. Can we take it seriously today?

The Jevons Effect and the Rise of AI

Back in 1865, economist William Stanley Jevons observed something intriguing about the industrializing world, particularly the clouds of smoke over England. At the time, people believed that making coal use more efficient would conserve coal. Jevons noted the opposite: as steam engines became more efficient, coal became cheaper to use, and rather than consumption falling, demand surged.

This insight evolved into what we now call the Jevons effect, a principle in economics we often overlook. Essentially, when a resource becomes easier and cheaper to consume, we tend to use more of it, not less. We find innovative ways to utilize it and, instead of slowing down, we fill our newfound capacity with even more tasks.
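The dynamic hinges on how strongly demand responds to a falling effective price. A toy constant-elasticity model (the numbers and the function here are illustrative assumptions, not figures from the article) shows the tipping point: when demand is elastic enough, doubling efficiency increases total resource consumption rather than reducing it.

```python
def resource_consumed(efficiency: float, elasticity: float,
                      base_demand: float = 100.0) -> float:
    """Toy Jevons-effect model (illustrative, not empirical).

    The effective price of useful work falls as 1/efficiency; demand for
    useful work scales as price**(-elasticity); the raw resource consumed
    is that demand divided by efficiency.
    """
    effective_price = 1.0 / efficiency
    useful_work = base_demand * effective_price ** (-elasticity)
    return useful_work / efficiency

# Inelastic demand (elasticity 0.5): doubling efficiency cuts resource use.
print(round(resource_consumed(2.0, 0.5), 1))   # 70.7  (down from 100)

# Elastic demand (elasticity 1.5): doubling efficiency *raises* resource use.
print(round(resource_consumed(2.0, 1.5), 1))   # 141.4 (up from 100)
```

The crossover sits at an elasticity of 1: below it, efficiency gains conserve the resource; above it, the Jevons effect takes over and cheaper coal means more smoke.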

As a result, the workforce remains in motion.

We’re standing on the edge of another pivotal moment—maybe the most significant since the advent of the steam engine. The era of AI has arrived, promising efficiencies that could transform knowledge work similarly to how mechanization changed manual labor. There’s a lot of talk about how AI will liberate us from tedious tasks, allowing for more time to focus on what’s important.

What Have We Learned?

This conversation feels familiar. Back in 1930, John Maynard Keynes predicted that technological advancements would reduce the average workweek to 15 hours by 2030. He imagined a world so productive that we would choose leisure over labor. Fast forward to 2026, and it looks like he missed the mark. We didn't convert those productivity gains into leisure time; instead, we poured them into more services and products.

The history of computing acts as a guide to our current AI transition. Initially, mainframe computers, rare and expensive, served Fortune 500 companies. Yet from the minicomputer era to the now-ubiquitous personal computer, we never deemed the computing challenge "solved," even as costs fell and efficiency rose. With each price reduction came an avalanche of new machines and new uses. Today, the cloud has removed barriers to entry, giving small businesses software that was once available only to large corporations.

As low-level coding transformed into high-level programming languages, developers didn’t write less code; they created more, tackling challenges that were once seen as insurmountable. Today, we have more software engineers than ever, despite the existence of open-source libraries and cloud platforms automating substantial portions of development. This efficiency has enabled software to infiltrate nearly every aspect of our lives.

Now, we have large language models (LLMs) and coding agents, which lower the “cost of trying.” Tasks that once required a full team, like analyzing complex customer data or developing prototypes, can now be handled by a single entrepreneur.

Shifting Roles

Take Boris Cherny, for instance: he created Claude Code, then used it to submit 259 pull requests and modify 78,000 lines of code in a single month. This isn't about cutting down on the workforce; it's about one individual amplifying their output to match a larger team's capabilities. As the barriers to launching software projects and marketing initiatives shrink, companies are approving projects that would previously have been set aside.

Consequently, the workforce doesn't slow down. We're moving toward a shift where humans transition from producers to orchestrators, becoming "gardeners" who nurture and manage fleets of AI agents. Our roles are expanding; one person might now oversee tasks that once required several individuals. But the people freed from those tasks won't simply gain leisure. Instead, they'll manage their own AI agents across multiple projects, pushing the boundaries of what's possible. This is the Jevons effect in action, revealing a latent demand for knowledge work.

In the U.S., where the culture is steeped in growth and innovation, this trend of turning efficiency into more work is striking. Marketing employment, for instance, has increased fivefold in the last fifty years, even as tools like Photoshop and Google Ads made individual marketing tasks far easier. Efficiency didn't shrink the field; it transformed marketing from a niche role into a fundamental part of every business and gave rise to entirely new subfields.

Should We Care?

The risk, however, lies in blurring the line between "could" and "should." The lasting lesson of the Jevons effect is that just because we can do something more efficiently doesn't mean it's worth doing. Technology helps us perform tasks faster, but it doesn't tell us whether those tasks hold any real value. As our work shifts toward monitoring and adjusting the output of AI, we must still evaluate whether the results address important problems. Ultimately, human judgment is crucial: as the possibilities widen, it's our responsibility to discern which goals genuinely deserve our attention.

We’re not headed for that predicted 15-hour workweek; instead, we’re moving into a world packed with projects, where increased efficiency diminishes the cost of work but escalates the volume of work we tackle. The coal is cheaper, the fires burn hotter, and we’re working harder than ever.

