Can India's Light-Touch AI Regulation Deliver Brilliance Without the Roulette Risk?

India's financial regulators forge light-touch AI frameworks balancing rapid innovation against systemic risks

Representational Image (Alvarez/iStock.com)
By Indra Chourasia

Indra is a Senior Industry Advisor in the BFSI unit at TCS, with three decades of experience in business strategy and IT consulting. He leads CXO advisory and drives data- and AI-led innovation.

January 2, 2026 at 9:08 AM IST

Policymakers are grappling with how to devise an apt regulatory approach that balances fostering innovation against ensuring public safety, security, and transparency, while addressing the unintended behaviours of AI systems. Regulatory approaches to AI are characterised by two perspectives: one supports responsible AI under stringent regulations to ensure the stability and integrity of the system, while the other favours a largely unregulated path for AI innovation, even at the risk of creating vulnerabilities.

Model drift, algorithmic bias, data leaks, chatbot manipulation, cyber-attacks executed through agentic AI, and deepfakes are no longer hypothetical risks. Notable AI risk incidents, such as Apple Card's algorithmic bias, Cigna's unreviewed claim denials, Chime's fraud detection causing account freezes, and a Hong Kong firm duped by deepfake fraudsters, highlight the need for enhanced regulatory standards to ensure effective AI governance with adequate guardrails. Critically, systemic risks arising from concentration in Systemically Important Digital Infrastructure (offering AI models and platforms) require clearly assigned accountability across AI value chain operators.

AI Regulation
Currently, most global jurisdictions, including the UK, Singapore, and Australia, rely on voluntary governance standards and frameworks, while the EU is implementing its comprehensive AI Act. In the US, there is no federal AI regulation, but the National Institute of Standards and Technology has published an AI Risk Management Framework for voluntary use, and a few states have enacted state-level laws. The recent presidential executive order aims to establish a uniform federal policy and disclosure standards for AI, potentially serving as a legislative template for others.

As India embarks on its AI trajectory, it lacks comprehensive legislation regulating the entire gamut of AI; existing laws like the Digital Personal Data Protection Act and the Information Technology Act address only limited aspects of data privacy, digital commerce, and cybercrime. Guided by the National Strategy for Artificial Intelligence of 2018, the NITI Aayog's 2021 approach papers emphasised establishing principles for responsible AI, focusing on ethical considerations during AI implementation. In November 2025, MeitY published the India AI Governance Guidelines, which set out a voluntary, light-touch framework addressing innovation enablement as well as risk mitigation.

AI in the Indian Financial Sector
To evaluate the use of AI/ML technologies in securities markets, SEBI's May 2019 circular mandated mutual funds, market intermediaries, and market infrastructure institutions (MIIs) to report their AI/ML applications. In November 2024, it proposed amendments to delineate responsibilities for AI usage by MIIs and other regulated entities. More recently, its June 2025 consultation paper outlines guiding principles for the responsible use of AI/ML, focusing on optimising benefits and reducing risks of AI/ML applications.

In 2018, the IRDAI established a working group that recommended a regulatory and supervisory framework for InsurTech, focused on risk assessment, product design, and pricing aspects, while considering innovations driven by technologies such as AI/ML, IoT, big data, and blockchain. It emphasised the need for a reframed regulatory approach to effectively manage new risks and business models shaped by technological innovations.

In August 2025, a committee constituted by the Reserve Bank of India released the 'Framework for Responsible and Ethical Enablement of Artificial Intelligence' report, recommending regulatory frameworks for AI adoption in the financial sector. The committee formulated seven sutras to guide AI adoption, along with six strategic pillars: infrastructure, policy and regulatory architecture, capacity building, governance structure, protection and safeguards, and oversight of AI systems.

Light-Touch Regulation
In a rapidly changing AI ecosystem, light-touch regulation does not mean lowering one's guard or compromising safety. Rather, it favours a non-prescriptive approach that promotes growth and competitiveness without stifling innovation. This demands an attentive outlook and agile regulatory frameworks that build forward-looking capabilities for advancing the innovation agenda in the financial sector. To bridge the gaps, regulatory efforts should focus on enhancing skills and capabilities that enable innovation at scale. Aiming for a higher maturity stage of AI adoption, key priorities to strengthen the foundational blocks of light-touch regulation include:

  • Focused AI capacity-building initiatives in the form of industry-centric education, skilling and training programs, and academic partnerships for inclusive talent development as well as facilitation of sandbox and compute infrastructure to nurture entrepreneurial drives.
  • Regulatory gap analysis covering current regulations, rules, and guidelines to identify gaps arising from the integration of AI systems across business processes, followed by requisite amendments.
  • Facilitation of contextual industry standards in the form of voluntary practice codes and interoperable technical standards in consultation with the industry participants without unduly imposing compliance burdens.
  • Evaluation of industry-level risks potentially affecting individuals and larger public interests and formulation of mitigation measures and contingency procedures, including regular testing and calibrations to avoid any contagion risks or systemic disruptions.
  • Establishing a common reporting mechanism for transparent disclosure of incidents observed with actual or potential harm or malfunction attributed to AI for better information awareness, lessons learned, and conformity assessments.
  • A practice model of enterprise AI risk governance designed to guide industry participants in preparing board-level oversight mechanisms. It should include policies, procedures, risk management strategies, contingency plans, training programs, data governance, technical documentation, audits, and adequate human oversight throughout the lifecycle of AI systems.
  • Enabling thematic AI experimentation through dedicated regulatory sandboxes mirroring global best practices.

The UK Financial Conduct Authority, the Monetary Authority of Singapore, and the European Forum for Innovation Facilitators facilitate curated use cases, best practices, skills training, and live experimentation with AI products and privacy frameworks. Unlike their global counterparts, the domain-specific sandboxes and the Interoperable Regulatory Sandbox established by the RBI, SEBI, IRDAI, PFRDA, and IFSCA have not yet enabled a thematic AI innovation environment. This also limits a dynamic, evidence-based approach to AI regulation and policymaking.

Driven by a long-term outlook and a vigilant stance, the light-touch framework goes beyond the operational realm of compliance, offering a collaborative approach that strategically guides the industry's growth along the AI trajectory.