A high-powered government committee on Artificial Intelligence (AI) has determined that India does not currently require a separate law to regulate AI technology. The panel concluded that most of the risks associated with AI's rapid advance can be effectively handled by existing legislative frameworks, leaving no need to draft standalone legislation.
The committee’s recommendations were documented in the ‘India AI Governance Guidelines’, announced by top government officials, including Principal Scientific Adviser Ajay Kumar Sood and IT Secretary S. Krishnan. The guidelines explicitly stated that “existing laws (for example on IT, data protection, consumer protection and statutory civil and criminal codes, etc.) can be used to govern AI applications. Therefore, at this stage, a separate law to regulate AI is not needed, given the current assessment of risks.”
The decision comes at a time when major global AI players, such as OpenAI, Perplexity, Google and Meta, are doubling down on their generative AI efforts for Indian users.
India doesn’t need separate laws for AI
India’s approach is “to govern applications of AI by empowering the relevant sectoral regulators, and not to regulate the underlying technology itself.” The primary goal, according to the committee, is to encourage innovation and AI adoption while simultaneously protecting individuals and society from the risk of harm.
Although the government is not enacting new legislation, IT Secretary S. Krishnan clarified that India is ready to act if circumstances change.
For the time being, the committee is advocating for the adoption of balanced, agile and flexible frameworks that support innovation while minimising risks. To achieve this, the official guidelines propose several key steps:
India-specific risk assessment: Create a framework based on empirical evidence of harm in the Indian context.
Voluntary industry measures: Encourage the industry to adopt voluntary measures regarding privacy and security.
Grievance redressal mechanism: Establish a system for reporting AI-related harms and ensuring resolution within a reasonable timeframe.
Review and amendment of current laws: Review current laws, identify regulatory gaps in relation to AI systems, and address them with targeted amendments.
Transparency reports: Industry should publish transparency reports that evaluate the risk of harm to individuals and society in the Indian context, to be shared confidentially with relevant regulators if they contain sensitive information.
Graded liability system: Implement a system for accountability based on the function performed by the AI, the level of risk, and whether due diligence was observed.
The committee stressed that “timely and consistent enforcement of applicable laws is required to build trust and mitigate harm,” highlighting the risks the new technology poses to society and citizens.
