Reskilling Alone Won’t Save Us From the AI Shock

Reskilling helps, but the real test lies in building sovereign compute, research depth, and new institutional responses to AI-led change.

Srinath Sridharan

Dr. Srinath Sridharan is a Corporate Advisor & Independent Director on Corporate Boards. He is the author of ‘Family and Dhanda’.

Anand Venkatanarayanan

Anand Venkatanarayanan is co-founder and Chief Technology Officer at DeepStrat.

February 18, 2026 at 6:09 AM IST

Is the death of software engineering as a profession already here? Large language models can generate code that mostly works when given detailed instructions, and the capability has advanced far enough that firms may soon ask a blunt question: why hire large cohorts of entry-level programmers for routine tasks at all?

The first-order effect, fewer entry-level hires, is easy to see. But what is being unsettled is not only a job category; it is the organisational architecture that has structured modern software work for decades.

Programming has always been, at its core, a translation profession. First, there exists a problem in the world that needs solving, one that is economically viable over time. That problem must be researched, analysed, understood, and written down in a specification detailed enough for an engineer to act on. The engineer then translates that specification into code accurate enough to represent what was intended. The code is written in high-level languages, supported by frameworks and libraries that accelerate the process, and is then compiled into machine-level instructions executed by CPUs and GPUs. This repeated translation from messy reality into machine precision is inherently lossy. It is prone to error, misunderstanding, and what we now call hallucination. That gap between human ambiguity and computational exactness is precisely what made programming fascinating, difficult, and remunerative.

Over time, that complexity also created hierarchies designed to manage it: project managers, delivery leads, product managers, account managers. Entire organisations specialised in ‘coordinating human effort’ to translate real-world uncertainty into working systems, competing on scale, cost, and execution discipline. But what if this human organisational model is no longer the best way to translate problems into machine language?

If AI systems can do much of the translation directly, the question is no longer only about productivity. It becomes about what the new model of work will be, and what roles human beings will occupy.

The current software model specialises roles. A product manager understands customers but rarely writes production-quality code; human bandwidth does not stretch to both. A programmer writes code but rarely goes into the field to interact with customers. A project lead manages multiple programmers and solves coordination problems.

In the newer model, there may be no need for programmers at all, if the problem is not hard enough. A project lead may instead find herself managing multiple autonomous agents operating from written specifications, coordinating outputs subject to lossy translation and the familiar problem of LLM hallucination. In such a world, she may not know the low-level details of the generated code. In many cases, she will never be able to review it meaningfully. She will track outcomes instead: tests passed, features shipped, errors reduced.

The first iteration of this role revision is almost surreal. The project lead becomes an ‘agentic zookeeper’, managing a menagerie of agents, models, prompts, and tools, all stochastic. The role may feel liberating at first, enabling operation at a higher level of abstraction, issuing commands to intelligent systems that can be summoned at will. She remains responsible for the actions of these ‘superhuman ghosts’. Soon enough, she will be asked not only to manage them but to understand them, to reason about their failure modes, and to build guardrails that govern behaviour.

This produces the next iteration of the role: ‘AI psychologist’. A distinct skill emerges, not in writing code, but in shaping the outcomes of systems that do not reason as humans do, ensuring their behaviour stays within the tolerance limits demanded by real-world business and social consequences. This will be a rare skill, held by far fewer people than programming is today. Only a few well-funded corporations will be able to afford such roles at scale.

In the interim, human jobs dependent on the older organisational model may reduce sharply, with effects that ripple far beyond the technology sector. India will be one such society where the consequences could be more pronounced. Unlike China, it lacks an industrial base that can absorb labour displacement. Unlike the US or Europe, it is not yet a foundational research power in frontier AI. Unlike America, it does not enjoy the exorbitant privilege of dollar hegemony that cushions structural transition.

India’s digital rise has largely been that of a service economy, built on labour arbitrage and the long strategic openness of the Pax Americana era. That period is now fading. The global technology order is fragmenting into blocs of compute, models, export controls, and strategic denial. We are entering an age where intelligence itself becomes an industrial input, and where the countries that own models and infrastructure will shape not only markets, but labour futures elsewhere.

Training and upskilling, while necessary, will not offset job losses at scale. The second-order effect is a shrinking intake of new students into these fields. The third-order effect could be cultural: a generation concluding that the ladder of mobility is narrowing.

Isaac Asimov could imagine a robo-psychologist in fiction. What he did not imagine was that societies might need human psychologists in greater numbers because machines became too central, too quickly. Yesterday’s project leads are already beginning to morph into early ‘Susan Calvins’. That is how quickly the ground has shifted in a matter of months.

India’s response must therefore be structural. It must build sovereign AI capacity, indigenous compute infrastructure, and deeper research ecosystems. Education must move away from routine coding toward systems thinking, model governance, and human judgment. Industrial policy must treat AI as a national capability, linking talent, chips, platforms, and trust into a coherent strategy. The understated risk is that the profession that built India’s digital rise over the past few decades may soon cease to exist in mass form.