India’s Digital Personal Data Protection (DPDP) Act, 2023 and the operational DPDP Rules 2025 may not call themselves AI laws, but they nonetheless shape how artificial intelligence functions in the country. By defining ‘processing’ to include automated operations, the law brings most AI systems under its umbrella.
In practice, this means AI models that use personal data must now follow principles of consent, purpose limitation and data minimisation. In layman’s terms, data minimisation means a company can no longer collect excessive personal data; it can collect only what is strictly necessary for a specific, defined purpose.
While the law does not explicitly spell out algorithmic transparency, the ability to understand and explain how an AI model arrives at a particular decision, its rules still push companies towards more responsible AI use.
For companies classified as Significant Data Fiduciaries (SDFs), the bar is set higher. They are required to conduct annual Data Protection Impact Assessments that examine the risks associated with automated decision-making and profiling. This makes AI governance less about unchecked innovation and more about ethical data handling and accountability.
Where clarity ends and ambiguity begins
Industry experts broadly agree that DPDP introduces long-awaited structure, but its silence on several AI-specific realities remains a concern. “The Act provides measures that support informed decision-making for AI development by setting ground rules for data processing and proposing obligations such as consent, data minimisation, purpose limitation, and storage limitation. However, the Act does not provide clarity on the use of data voluntarily made available in public, nor does it explain how organisations can obtain consent retroactively to continue using such data for training AI models,” Mayuran Palanisamy, Partner, Deloitte India, told financialexpress.com.
He pointed out that large AI systems depend heavily on vast volumes of data available on the internet. While DPDP is built on informed and specific consent, it remains unclear how companies will realistically obtain permission for datasets that already run into millions of data points.
A consent-first philosophy, unlike the EU
India’s path also diverges from global approaches to AI regulation. “India’s approach is built on a ‘data principle first’ foundation, prioritising personal data protection, unlike the EU, which has adopted a ‘risk first’ AI-specific law. India’s Act focuses on how data is handled by organisations and how to provide more control to data principals, while the EU AI Act classifies systems by risk,” Palanisamy noted.
This means India’s framework focuses less on categorising AI by danger level and more on ensuring that user data, which fuels AI, is handled carefully and responsibly.
The problem with what’s already out there
A major grey area lies in the treatment of publicly available data. “As per Section 3(c)(ii), the provisions of the DPDP Act do not apply to ‘personal data that is made or caused to be made publicly available by the Data Principal to whom it relates’… this means publicly available personal data can still be scraped and used for model training without legal consequence,” an SFLC.in spokesperson said.
This opens up difficult questions: Does public data automatically become free for AI use? And what happens when that data is uploaded by third parties?
Sanjay Trehan, digital media advisor, argued the issue will evolve. “This is a very complex issue, and more clarity will emerge over time about DPDP’s role in creating responsible AI… How it navigates the web of smart algorithms and hyper-personalisation models and balances individual rights would be the space to watch out for,” he said.
He also raised deeper concerns about the long-term impact of AI systems that have already absorbed massive volumes of data: “How do you leverage it without crossing the line? How do you erase data which is deeply embedded in large models? How do you minimise algorithmic bias? How do you make the machine unlearn?”
For AI companies on the ground, DPDP is already influencing operational strategy. “For AI developers, this means clearer expectations around issues like consent… retention limits… and breach transparency,” Abhishek Razdan, Co-founder & CEO, Avtr Meta Labs, said.
Not an AI governance law
“DPDP is still a data-protection law — not an AI-governance law. It doesn’t yet address how large AI models should treat web-scraped data, how training datasets should be documented or audited, or how to classify high-risk AI systems,” Razdan added.
“The Digital Personal Data Protection (DPDP) Act provides greater clarity for AI companies in India by establishing clear guidelines for data usage and processing… certain elements, particularly related to AI-specific regulations, might still require further clarity as the law evolves,” Apurv Agrawal, Co-founder & CEO, SquadStack.ai, commented.
“We don’t scrape data at all… All data we process is collected with a clear purpose and governed by strict compliance, privacy controls, and certification-grade security practices, fully aligned with DPDP,” he added.
While DPDP represents a significant step forward in regulating how personal data is handled, it leaves major AI-specific questions unresolved, from how training data should be sourced and audited to how bias and explainability should be managed.
As Trehan puts it, “Clearly, DPDP is a significant step up in terms of data protection and maintenance of data integrity, but its application in the AI world is fraught with challenges, both technical and in managing privacy violation risks.”
