The government’s move to invite public consultation on its proposal for a legal framework to regulate deepfakes is the right step at the right time. For too long, the menace of synthetically generated content has grown unchecked, warping trust in digital spaces and exploiting the openness of social media.

The recent spate of manipulated videos targeting senior ministers only underscores the urgency of the problem. Most of these are not mischievous pranks but calculated digital forgeries designed to deceive and defraud citizens. Finance Minister Nirmala Sitharaman has warned that we are entering an era of hacking trust, clearly highlighting the rampant misuse to which technology can be put.

Therefore, the move to formally define and regulate synthetic content under the Information Technology Rules, 2021, is a crucial step in curbing such misuse. By proposing mandatory labelling and metadata for artificial intelligence (AI)-generated media, the government has made clear its intent to ensure transparency without stifling innovation.

Users must be able to distinguish the real from the artificial at a glance, and platforms must be held accountable for facilitating that clarity. There have been suggestions that instead of labelling fakes, a better approach would be to certify authentic content, thereby reducing the volume of material that must be continuously monitored.

However, in the Indian context, where the digital user base is vast and the potential damage of a viral deepfake is catastrophic, the government’s approach seems pragmatic. Whether one authenticates the real or flags the fake, the end goal is the same: to prevent deception from masquerading as truth. These nuances can be debated and refined during the consultation process.

Equally important is the government’s parallel move to tighten the framework for ordering the takedown of online content. By reserving this power for senior officials—joint secretaries and above or deputy inspector general and above in the police—the government has injected a level of accountability that was missing earlier. The provision for a review mechanism by secretary-rank officers further ensures that takedown orders are not arbitrary but proportionate.

These changes came after the Karnataka High Court upheld the government’s authority in such matters, which makes the timing significant. Rather than using the court’s endorsement to push through heavy-handed measures, the government has opted to refine and formalise procedures.

To be sure, the problem is not something the government can solve alone. Deepfakes represent a technological frontier that evolves faster than any regulatory response. The tools that create synthetic content are increasingly accessible, while the platforms that host them are global and often slow to act.

In such an environment, enforcement will always be chasing innovation. This is why regulation must go hand in hand with awareness and shared responsibility. Labelling, after all, is not a silver bullet; it should be seen as one tool among many in the fight against misinformation and manipulation.

Platforms must strengthen their detection systems, but users too must learn to question what they see and share. Technology’s potential shouldn’t be stifled by fear, nor its misuse ignored in the name of progress. The government’s proposals strike this balance. In the end, what will determine success is not the number of rules written but the vigilance with which they are enforced and observed by every stakeholder on the digital frontier, including ordinary users who are the ultimate targets of these scams.