It's February 2020 again.
An exponential process is in motion — one that will inevitably shake the world to its core — and upend our economy, politics, and social lives. Yet most people are still going about their business, oblivious as dinosaurs in the shadow of a descending asteroid.
This is what many in and around the AI industry believe, anyway.
Except, in this telling, the invisible force that's about to change our world isn't a virus that will rip through the population and then ebb. Rather, it is an information technology that will irreversibly transform (if not extinguish) white-collar labor, accelerate scientific progress, destabilize political systems, and, perhaps, get us all killed.
Of course, such apocalyptic chatter has always hummed in the background of the AI discourse. But it's grown much louder in recent weeks.
Critically, tech pundits and executives aren't alone in thinking that something just changed: Software firms have seen their stock prices plunge as traders decided that AI would soon render many of them obsolete.
Not long ago, the conventional wisdom around AI's near-term effects sounded radically different. For much of last year, industry analysts and journalists warned that AI had become a bubble ripe for popping.
After all, major labs' capital expenditures were far outpacing their earnings; OpenAI alone was slated to invest $1.4 trillion in infrastructure over the ensuing eight years, even as it collected only $20 billion in annual recurring revenue. These gargantuan investments would only pay off if demand for AI services skyrocketed.
And the technology's commercial potential looked uncertain. Even as venture capitalists waxed rhapsodic about AI's transformative powers, official economic data showed its impacts on productivity and employment were marginal, at best.
So, what changed? The answer, in three words, is the "agentic" revolution.
Until recently, public-facing AI systems were fundamentally passive. You typed a question to ChatGPT and the robot replied, then awaited your next instruction. The experience was a bit like texting with an infinitely vast and sycophantic encyclopedia.
Then, last year, commercially viable AI agents hit the market.
These new systems are more autonomous and dynamic than their predecessors. They function less like souped-up search engines and more like junior staffers. They can independently decide which steps to take next, use tools (like code editors, spreadsheets, or company databases), test whether their plan worked, try another approach if it fails, and keep iterating until the job is done.
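That decide-act-check-retry loop can be sketched in a few lines of Python. This is purely illustrative: real agents call a language model and external tools, while every function name below is a hypothetical stand-in.

```python
# A toy sketch of the agentic loop: act with a tool, check the result,
# and retry with a different approach until the job is done.

def run_agent(goal, tools, check, max_steps=5):
    """Try each available tool in turn until `check` confirms success."""
    history = []
    for step, tool in enumerate(tools):
        if step >= max_steps:
            break                     # stay within the step budget
        result = tool(goal)           # act: invoke a tool (editor, DB, ...)
        history.append((tool.__name__, result))
        if check(result):             # test: did the plan work?
            return result, history    # done: job verified
        # otherwise loop: pick the next approach and iterate
    return None, history              # gave up without a verified result

# Stand-in "tools": the first fails its check, the second succeeds.
def flaky_search(goal):
    return "no results"

def query_database(goal):
    return f"answer for {goal!r}"

result, trace = run_agent(
    goal="find Q3 revenue",
    tools=[flaky_search, query_database],
    check=lambda r: r.startswith("answer"),
)
```

The point of the sketch is the control flow, not the stand-ins: unlike a passive chatbot, the loop itself decides when to try again and when it's finished.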
This is what the big labs had long promised but failed to deliver: Machines that could not only complement high-skilled workers but — at least in some cases — dramatically outperform them.
This is one reason why many technologists and commentators are predicting massive, near-term AI-induced disruption: Even if AI progress stopped today, the adoption of existing systems would abruptly devalue many businesses and white-collar workers.
At the same time, AI agents have eased Wall Street's fears of an artificial-intelligence bubble: The idea that demand is poised to soar for Claude, ChatGPT, and Gemini — and the data centers that support them — seems less far-fetched than it did six months ago.
Those bullish on AI progress also believe Americans are once again sleeping on the speed and scale of what's to come. In this view, as impressive as AI agents' current capabilities are, they'll pale in comparison to those at the fingertips of everyone with an internet connection this December. As with the pandemic, the full consequences of an instant industrial revolution are bound to be both immense and unforeseeable.
The robot apocalypse (and/or utopia) isn't necessarily nigh
There's little question that agentic AI is going to reshape the white-collar economy. Whether it has brought us to the cusp of a brave new world, however, is less certain.
There are many reasons to think that AI's near-term impacts will be smaller and slower than Silicon Valley's bulls (and catastrophists) now believe.
First, AI still makes mistakes. And this fallibility arguably constrains its potential for replacing human workers in the here and now. An autonomous agent might execute the right trade or fix the errant line of code nine times out of 10. But if, on the 10th attempt, it stakes your firm's capital on Dogecoin or introduces a security vulnerability into your app, you're probably going to keep a lot of human supervision over your highest-stakes projects.
Second, institutional inertia tends to slow the adoption of new technologies. And in some key sectors — such as health care and law — regulations may further constrain AI deployment.
Third, and most critically, it's not clear whether AI's capabilities will continue growing exponentially. Plenty of past technologies enjoyed compounding returns for a while, only to plateau.
Nevertheless, the bulls' case has gotten stronger. Today's AI systems are already powerful enough to transform many industries. And tomorrow's will surely be even more capable. If celebrations of the singularity are premature, preparations for something like it are now overdue.