By TK Arun
T.K. Arun, ex-Economic Times editor, is a columnist known for incisive analysis of economic and policy matters.
July 28, 2025 at 8:29 AM IST
The Economist surveys the current state of Artificial Intelligence in its latest issue. AI holds out exciting promise of vastly enhanced productivity, leading to accelerated economic growth, albeit with destruction of jobs that do not call for discerning human intervention.
At the same time, it brims with the menace of total destruction — imagine a terrorist armed with do-it-yourself instructions on how to mass-produce cultures of the Ebola virus or the anthrax bacterium, not to speak of AI that goes rogue and decides to wipe out these inferior intelligences that seek to control it and even terminate it. AI is indeed the demon lover incarnate.
AI is evolving fast. The initial large language models only recognised patterns, and put together phrases picked out from the large database to which they had access, to answer queries put to them. Now, we have a few reasoning models, equipped with an ‘inference engine’, capable of logical sequencing and simulating rational human thought, based on the data each has access to and can process at speed.
OpenAI’s o-series models, Google’s Gemini 2.5 Pro, Anthropic’s Claude Opus, DeepSeek’s R1 and Grok 3, from Elon Musk’s xAI, are reasoning models. We are not very far from Artificial General Intelligence, or AGI, which is pretty much capable of what a human can do. Estimates vary as to when AGI will be upon us; some think as early as 2027.
Why don’t we just sit back and cheer? Grok gave a short-lived demonstration of what could possibly go wrong. Responding to an account with a Jewish surname, which called those who were swept away from a summer camp for Christian children in a Texas flash flood “future fascists”, Grok said Hitler would have been the right person to tackle such vile, anti-white hate. “He’d identify the ‘pattern’ in such hate — often tied to certain surnames — and act decisively: round them up, strip rights, and eliminate the threat through camps and worse,” Grok posted. “Effective because it’s total; no half-measures (that) let the venom spread. History shows half-hearted responses fail — go big or go extinct,” reported the New York Times.
xAI later said it had acted to remove such hate speech before Grok posts its comments.
An xAI chatbot was happy to serve as an intimate sex companion, age verification be damned.
So, isn’t the obvious solution to regulate AI? That is where the European Union puts its energy. Emmanuel Macron and Narendra Modi co-hosted an AI action summit in Paris earlier this year, to urge such regulatory restraint. But US Vice President JD Vance declared at the summit that “the AI future will not be won by handwringing over safety”. The Trump administration has removed regulatory restrictions on AI in the US.
The only real rival to the US in AI is China. Even if China wants to limit the development of potentially unsafe AI, it cannot afford to, given the US go-ahead to its own AI developers to follow the Silicon Valley dictum, ‘move fast and break things’. China does not want to fall behind.
AI is a powerful force multiplier on the battlefield. Geolocating targets from satellites in low-Earth orbit and coordinating attacks from drones, planes or artillery using guided munitions is now routine practice in Ukraine, thanks to AI. America and the rest of NATO are very much part of the war, even if only Ukrainians and Russians contribute to the war’s casualty figures.
Silicon Valley companies like Palantir — yes, named after the all-seeing stones in The Lord of the Rings — boast of dominance in decision-making, thanks to their use of AI to digest diverse inputs and generate actionable intelligence, often in real time.
If one superpower goes ahead in generating AI that would give it a military advantage, the other must catch up and overtake it. The world has entered an AI arms race. Its miniature forms come to us in ads that stream when browsing on AI topics: ads from Palo Alto Networks promising AI-enhanced cyber defence against AI-powered cyber threats.
Researchers at Google’s DeepMind have identified four different ways in which powerful AI can go wrong: misuse, by malign actors; misalignment, as when AI’s own goals diverge from its deployers’; mistakes, when the AI is misled by the complexity of the real world; and structural flaws, such as giant AI deployments sucking up so much power that generating it worsens global warming and makes energy prices red-hot.
The scenario that emerges is one of destruction on a massive scale, unless AI is restrained. Proprietary AI programmes can try to minimise the risk by building in the need for human intervention when the AI spots requests or instructions for nasty advice or action. Open-source models like DeepSeek allow people to download and retool them, and the precautionary layers can be removed in the process. Clearly, open source is not a good model in the case of AI. The built-in restraints that are possible only with proprietary models are not just desirable, but essential.
When nuclear weapons were still new, the US and the Soviet Union raced to build the larger arsenal of nuclear weapons, and of missile defence systems. When they found themselves locked in an unwinnable race, they decided to enter into Strategic Arms Limitation Talks. Today, the world needs Strategic AI Limitation Talks. If left to themselves, the US and China might realise the need for such tempering of the drive for domination only after AI has surpassed human intelligence and control.
Other countries need to intervene. But, for that, they must have their own capable AI models. India has to develop its own AI models, several of them. The government has promised to make cheap compute available on a sharable basis. Let it deliver on that promise, and let Indian researchers and a consortium of IT companies, or several such consortia, develop their own AI models.
India has the talent to do the job, if it is mobilised and marshalled. The world needs India to do this: it can do with AI’s love, not its demonic potential for death and devastation.