Beyond the Hype: Why AI May Not Deliver as Advertised

AI is being priced as destiny. History suggests technological revolutions travel a slower, more uneven path than markets assume.

By V Thiagarajan

Venkat Thiagarajan is a currency market veteran.

March 4, 2026 at 2:26 PM IST

One was reminded of Future Shock by a blog post from Citrini that lays out a doomsday scenario in which the success of artificial intelligence becomes a catastrophe for the wider economy. The post went viral and sent tremors through financial markets. It painted a vivid picture: AI replaces white-collar labour faster than the economy can absorb it, consumer demand collapses, unemployment spikes past 10%, and the S&P 500 falls nearly 40%. Citrini calls this “Ghost GDP”: output that shows up in profits but does not circulate, because displaced workers have lost their incomes.

Markets, however, may be conflating a long-term structural shift with near-term certainty. That dislocation creates two kinds of opportunity: mispriced growth exposures in public markets, and private-market picks-and-shovels plays that win regardless of the macro path. We shall examine the assumptions behind these narratives and why they may not hold.

Technological innovations are widely considered the main source of economic progress, yet they have historically generated anxiety. It is tempting to dismiss such concerns by noting that people have always worried about new technology, and it is often said that one should not judge the road ahead through the rear-view mirror. That is usually true. But sometimes this time really is different.

One of the pioneers of the internet, Scott Bradner of Harvard University, once described the internet revolution as “the platform for all subsequent revolutions”. The same could be said of AI. It is a general-purpose technology and, as seen with computers, the innovations it enables will spark revolutions across multiple domains. To paraphrase the thesis of Collingridge’s The Social Control of Technology, when a technology is in its infancy and can be controlled, we don’t understand its consequences; by the time we do, it is so widespread and entrenched that control becomes difficult. 

History repeatedly reminds us to prepare for the unexpected.

The emerging consensus suggests that this Fourth Industrial Revolution (4IR) could deliver higher productivity, lower employment intensity and diminished wages.

Investors are pouring money into a wave of AI SaaS startups racing to turn raw model power into practical, user-facing products. Most of these startups do not build their own models; instead, they build on top of foundation models from providers such as OpenAI and Anthropic. Their value lies in the interface: wrapping the technology in user experiences that solve real problems, from enterprise workflows to consumer apps. Meanwhile, every major tech company is investing staggering amounts of capital to expand its infrastructure.

Yet the world has not fully grasped the speed or breadth of this new revolution.

The well-known phrase “nobody ever got fired for buying IBM” offers a partial historical analogue to the current frenzy around AI. IBM, while expensive, was a recognised leader in automating workplaces, ostensibly to the benefit of the corporations that bought its systems. The company famously re-engineered the environments where its systems were installed, ensuring that office infrastructure and workflows were reconfigured around computers rather than the other way around. Similarly, AI firms now argue that we are entering not merely an era of adoption but one of proactive adaptation to their technology.

AI Implications
Debates about AI tend to focus on one of two time horizons: the near term, with its real-world risks and opportunities, or a distant future portrayed as either utopian paradise or dystopian hellscape.

First, AI is generally seen as a source of significant productivity gains. By increasing efficiency and improving decision-making, it could boost consumer demand and create new revenue streams. Some macroeconomic models anticipate a significant acceleration in global growth in the medium term.

Second, as firms rely more on technology, automation, and higher-value activities, economic growth may increasingly come from higher output per worker rather than the absorption of additional labour. This implies that future recoveries can be economically robust even if employment growth is more limited than in previous expansions.

When returns on capital exceed returns on labour, firms invest in machines rather than hire workers. The only way people remain competitive, therefore, is by accepting lower wages. As Jason Furman, chairman of the US Council of Economic Advisers under President Obama, observed in 2016: “The traditional argument that we do not need to worry about robots taking our jobs still leaves us with the worry that the only reason we will still have our jobs is that we are willing to do them for lower wages.”
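To make the logic concrete (a stylised, textbook cost-minimisation condition, not a formula from Furman): a firm substitutes machines for workers whenever the cost of an extra unit of output produced by capital falls below the cost of the same unit produced by labour, that is, when r/MP_K < w/MP_L, where r is the cost of capital, w the wage, and MP_K and MP_L the marginal products of capital and labour. If AI keeps raising MP_K while r falls, labour remains competitive only if w falls too.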

Productivity Gains
Views on technology’s economic impact span a wide spectrum. At one end are pessimists who assert that the digital economy’s productivity boost faded in the decade before the financial crisis. At the other are technology optimists such as Ray Kurzweil, who foresee a technological “singularity” bringing potentially exponential economic growth.

Another technology futurist, Peter Diamandis, has argued that such a singularity could lead to an era of abundance: “Within a generation, we will be able to provide goods and services once reserved for the few to anyone who needs or desires them.”

AI is also expected to overcome “Baumol’s disease” — the long-standing difficulty of raising productivity in labour-intensive service sectors such as health and education.

Many studies suggest AI adoption may provide a one-time boost to the level of productivity. The more important question is whether it can accelerate the growth rate of productivity. Yet despite increasing adoption of AI and other digital technologies over the past decade, productivity growth across many developed economies has remained sluggish. This appears paradoxical given aggressive corporate adoption: Duolingo has reportedly replaced contract workers with AI, while Shopify is said to have instructed teams to treat AI as the default option before hiring additional staff.
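The level-versus-growth distinction matters because growth compounds. A rough back-of-the-envelope sketch in Python, with invented numbers purely for illustration:

# Illustrative arithmetic with assumed numbers (not estimates): a one-off
# level boost to productivity versus a persistently faster growth rate.
base_growth = 0.01      # 1% trend productivity growth per year
level_boost = 0.10      # one-time 10% level gain from adopting AI
faster_growth = 0.02    # alternative: trend growth accelerates to 2%

years = 20
one_off = (1 + level_boost) * (1 + base_growth) ** years
accelerated = (1 + faster_growth) ** years

print(f"10% one-off boost, 1% trend: {one_off:.2f}x after {years} years")
print(f"No boost, 2% trend:          {accelerated:.2f}x after {years} years")
# Output: 1.34x vs 1.49x. The faster trend overtakes the one-off gain after
# roughly a decade, and the gap then compounds indefinitely -- which is why
# the growth-rate question matters more than the level question.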

The most remarkable economic fact about the ICT revolution was famously summed up in Robert Solow’s quip: “You can see the computer age everywhere but in the productivity statistics.” As an academic, Solow observed first-hand how computers transformed work. Secretarial pools that once typed academic manuscripts disappeared. Telephone calls were replaced by email. Papers that once had to be mailed to journals could be uploaded instantly. Yet despite these profound changes, the productivity statistics did not immediately reflect them.

For example, Baily and Gordon (1988) found that between 1979 and 1987, productivity in the non-electrical machinery industry — including computer production — rose nearly 12% per year. Yet aggregate US productivity growth was over 1.5 percentage points lower than in 1948-1973.

Hence the belief that AI reliably boosts individual productivity across most contexts and user types looks overstated.

Labour Markets
AI could also have highly disruptive social and economic consequences. It may foster dominant global firms that concentrate wealth and knowledge while transcending national borders. It could widen the gap between developed and developing economies and increase demand for certain skills while rendering others redundant.

A new chapter in the AI story may therefore emerge, one defined less by the mega-cap leaders investing in AI and more by the broader industries reshaped by it. This trend could have far-reaching consequences for labour markets, with the potential to increase inequality as well as productivity while exerting downward pressure on wages.

Over time, wages tend to readjust upwards as the inventions that shake the labour market make their way through the economy, and public policy works to redistribute the gains. But this process can be extremely protracted. British workers, for instance, did not see substantial real wage gains during the early Industrial Revolution until the 1840s, roughly 60 years after the labour upheaval began.

Since Schumpeter (1942), economists have distinguished between process innovations, which tend to reduce employment, and product innovations, which create entirely new products and jobs. Pianta (2004) showed that the balance between the two has varied across time and space. Fortunately, past breakthroughs such as electricity played both roles: they automated existing processes, but they also created new products and industries.

Early economists who worried about technological unemployment, such as James Steuart and David Ricardo, could scarcely have imagined that technology would create more jobs than it displaced.

Such fears persisted into the modern era. Jeremy Rifkin (1995), for example, likened the spread of technology to “a deadly epidemic inexorably working its way through the marketplace... destroying lives and destabilising whole communities”. He cited a union leader who predicted that within thirty years as little as 2% of the world’s labour force would be needed to produce all necessary goods. That prediction has not come to pass (yet).

There is good reason to remain optimistic that at least the next several decades will not be characterised by widespread mass unemployment. Truly human-level or better artificial intelligence may indeed be such a game changer that history provides little guidance, but that prospect seems quite far off at present.

Wage Effects
If labour markets evolve towards more intellectually demanding roles, wages should theoretically rise in tandem. When job descriptions change and workers contribute more value, compensation should follow.  

In practice, however, workers’ pay is increasingly shaped by opaque algorithms and artificial intelligence systems, shifting compensation decisions away from human managers, clear legal standards, and collective bargaining. This phenomenon, known as algorithmic wage discrimination or surveillance pay, was first documented in app-controlled ride-hail and food-delivery work. It is now spreading to a range of other industries and services.

Unlike traditional wages negotiated through contracts, machine-learning-driven pay can be generated in real time. That makes income variable, uncertain, and inscrutable to the workers who rely upon it.
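To see how such a system might work mechanically, consider a minimal sketch in Python. Every input, weight, and rule here is invented for illustration; real platforms do not publish their pay formulas, which is precisely the inscrutability at issue.

from dataclasses import dataclass

@dataclass
class OfferContext:
    demand_index: float      # current customer demand in the zone (hypothetical)
    driver_supply: float     # workers available nearby (hypothetical)
    acceptance_rate: float   # this worker's historical acceptance rate

def offer_pay(base_fare: float, ctx: OfferContext) -> float:
    # Personalised, moment-specific pay: a surge multiplier from local
    # supply and demand, plus a quiet discount for workers the model
    # predicts will accept the job anyway.
    surge = max(1.0, ctx.demand_index / max(ctx.driver_supply, 1e-6))
    discount = 0.9 if ctx.acceptance_rate > 0.8 else 1.0
    return round(base_fare * surge * discount, 2)

# Two workers, the same job, the same moment, different pay:
print(offer_pay(10.0, OfferContext(1.5, 1.0, acceptance_rate=0.9)))  # 13.5
print(offer_pay(10.0, OfferContext(1.5, 1.0, acceptance_rate=0.5)))  # 15.0

The point of the sketch is structural: pay varies minute to minute with inputs the worker cannot see, and two workers can be offered different sums for identical work.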

The narrative that AI inevitably leads to a predetermined future of universal prosperity is therefore overly simplistic.

Amara’s Law teaches us that we tend to overestimate the impact of a technology in the short term, during the hype phase of development, but substantially underestimate its effect in the long term, as new ideas, products, processes, and business models proliferate.

Similarly, Gartner’s Hype Cycle research consistently traces technological breakthroughs from a “Peak of Inflated Expectations” down into a “Trough of Disillusionment”. The current frenzy may simply be moving through that familiar cycle at unusual speed.

In summary, newer technologies might disrupt faster thanks to integrated supply chains, faster R&D and larger investments. Yet their economic consequences are unlikely to unfold in a neat or linear fashion.

Even as we look ahead, markets may be interpreting the future through the rear-view mirror.