The notification of the Digital Personal Data Protection (DPDP) Rules, nearly two years after the enactment of the Act, is a significant step forward in the shift towards modern data governance. For a country with hundreds of millions of digital citizens and a rapidly expanding tech ecosystem, the transition from fragmented policies to a structured, rights-oriented regime has been overdue. It is, therefore, reassuring that the government has chosen clarity over haste. The Rules offer a well-calibrated enforcement road map, drafted in simple, unambiguous language. Equally welcome is the 12-18 month transition period for industry. This phased roll-out acknowledges that compliance is not an overnight exercise: companies will need time to re-engineer consent flows, redesign data-retention practices, and overhaul legacy systems.

The core philosophy of the DPDP framework (purpose limitation, data minimisation, and storage limitation) resonates clearly through the Rules. Data collected for one use cannot be arbitrarily repurposed without fresh consent; only the minimum personal information required should be taken; and personal data should not be retained after its purpose has been fulfilled unless required under law. The real test, however, will come once the 18-month window closes and companies must translate these ideals into practice. Another sensible aspect of the framework is the government’s measured position on cross-border data flows. Instead of imposing blanket localisation, the default rule permits overseas transfers of personal data, while allowing the Centre to restrict specific jurisdictions or entities if necessary. Predictably, global technology companies would prefer minimal state discretion, but it is also true that no government will fully relinquish the ability to ring-fence sensitive data in the event of public interest, national security, or geopolitical risk.

Much commentary has centred on concerns that the Act grants sweeping powers to the government and may dilute aspects of the Right to Information Act. These debates are not insignificant, but they are not the central issue; the exemptions for the State are not as boundless as critics fear. The more formidable challenge ahead lies in the field of generative AI (GenAI). The recently released report by the Ministry of Electronics and Information Technology on AI governance guidelines acknowledges this tension. The report notes that the DPDP’s principles, especially consent and purpose limitation, do not neatly align with the way modern AI systems operate. It highlights difficult questions: whether publicly available personal data can be freely used to train AI models, whether notices and consent managers can keep pace in a world of multi-modal AI and ambient computing, and whether the research and legitimate-use exceptions might unintentionally enable unregulated training on personal data.

The government’s approach for now is to treat DPDP as the backbone of data governance while evolving flexible consent mechanisms, transparency requirements, and grievance processes for AI systems. It also keeps open the possibility of future amendments if existing laws prove insufficient. This is where the real test of DPDP lies, because the world of data is not static. GenAI, new behavioural-targeting models, and immersive computing environments are developing faster than policy cycles. The law will need agility, regular revision, and real-time responsiveness if it is to stay effective. The DPDP Act and Rules have laid a strong foundation. Whether they remain fit for purpose will depend on how quickly the regulatory architecture is updated in sync with innovation. In that sense, the notification of the Rules is not a conclusion but a beginning.