By Pavan Duggal

Indian cyber legislation has reached a pivotal moment. The draft changes to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, referred to simply as the IT Rules, represent a definitive legal answer to one of the most pressing issues facing our digital society: the emergence of deepfakes and artificially created content.

Indian law now officially recognises “synthetically generated information” for the first time. This is material that has been modified or produced using artificial intelligence (AI) to look genuine. This acknowledgment goes beyond mere terminology. It confirms that legal frameworks must adapt to safeguard truth in an era where AI-generated fabrications can convincingly mimic reality.

Anchoring digital truth in law

We are living in a digital world where simply seeing, hearing, or reading something online does not necessarily mean it is authentic. Deepfakes, cloned voices, and synthetic media corrupt public conversation, damage reputations, and undermine democratic confidence. In this context, the new IT Rules Amendments 2025 represent forward-thinking legislation, bringing India’s digital regulation in line with the challenges of synthetic deception.

What makes these proposed amendments particularly strong is their precision. They provide an explicit definition of synthetically generated content and impose a legal obligation on intermediaries to ensure such material is properly labelled. This is not just administrative red tape. It represents India’s legal acknowledgment that transparency is both an ethical imperative and a regulatory requirement. When content has been digitally manipulated, users have a fundamental right to be informed.

Due diligence and burden of transparency

The revised framework requires platforms and intermediaries that host or facilitate synthetic media creation to label this content clearly. Every deepfake or artificially generated piece must carry permanent metadata or markers identifying it as synthetic, with a visible label covering at least 10% of the visual display area or, for audio, a disclosure within the initial 10% of its duration.
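
To make the 10% threshold concrete, here is a minimal sketch, in Python, of how a platform might compute a compliant full-width label banner and record a machine-readable synthetic marker. The field names are illustrative assumptions; the draft rules mandate the marker and the 10% coverage, not any particular schema.

```python
def synthetic_label_spec(width_px: int, height_px: int) -> dict:
    """Illustrative label spec for AI-generated visual content.

    Assumption: one simple way to satisfy a 10%-of-display-area rule
    is a full-width banner whose height supplies the required area.
    """
    total_area = width_px * height_px
    min_label_area = int(total_area * 0.10)        # 10% of the viewing area
    banner_height = -(-min_label_area // width_px)  # ceiling division
    return {
        "synthetic": True,                        # machine-readable marker
        "label_text": "AI-generated content",     # visible indicator text
        "banner_px": {"width": width_px, "height": banner_height},
        "min_area_px": min_label_area,
    }

# For a 1920x1080 frame, 10% of the area works out to a 108-pixel banner.
spec = synthetic_label_spec(1920, 1080)
print(spec["banner_px"])
```

In practice the permanent marker would live in the media file's provenance metadata rather than a Python dictionary, but the arithmetic of the visible indicator is the same.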

This creates an entirely different type of compliance obligation, one that demands not just proper behaviour, but also moral responsibility. By making truth a built-in feature, these amendments bring accountability back to the fundamental structure of digital space. The objective is straightforward: to help users differentiate between genuine content and computer-generated material.

A triad of platform obligations

For large social media platforms, the amendments establish three interconnected requirements that together foster a culture of openness.

User disclosure: Those uploading AI-created content must declare its synthetic nature.
Platform verification: The intermediary must use technical tools to confirm this declaration and identify hidden synthetic content.
Mandatory labelling: After determining content is synthetic, the platform must apply clear, prominent labelling that eliminates any doubt about the content’s origins.
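
The triad above is, in effect, a processing pipeline. The following Python sketch shows one hypothetical way a platform might wire the three obligations together; the function names and the stand-in detection heuristic are assumptions for illustration, not anything prescribed by the rules.

```python
def detect_synthetic(content: bytes) -> bool:
    """Placeholder for a technical verification tool (in reality, a
    deepfake classifier or a provenance-metadata check)."""
    return b"synthetic-marker" in content  # stand-in heuristic only

def process_upload(content: bytes, user_declared_synthetic: bool) -> dict:
    # Step 1: user disclosure — accept the uploader's declaration.
    # Step 2: platform verification — confirm it, and catch undeclared cases.
    is_synthetic = user_declared_synthetic or detect_synthetic(content)
    # Step 3: mandatory labelling — once content is determined to be
    # synthetic, the label is applied automatically, never optionally.
    return {
        "synthetic": is_synthetic,
        "label": "AI-generated content" if is_synthetic else None,
    }

# An undeclared deepfake is still labelled once verification flags it.
result = process_upload(b"...synthetic-marker...", user_declared_synthetic=False)
print(result["label"])  # AI-generated content
```

The design point the amendments make is visible in step 2: the uploader's declaration is an input to the pipeline, not its conclusion, which is why ignorance will not shield a platform.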

This measured strategy introduces a new framework for digital accountability, one that combines user transparency with platform responsibility. It transforms synthetic content governance from reactive moderation to proactive management.

Safe harbour and the paradigm of liability

The most significant change may be the modification of Section 79 of the Information Technology Act, which grants intermediaries protection from liability for user-posted content. According to the proposed amendments, an intermediary that knowingly allows or does not take action against improperly labelled synthetic content could lose this liability protection.

This sends a clear signal to digital platforms: following the rules is no longer a choice. Claiming ignorance will not work as a defence. The law requires alertness, transparency, and active monitoring. Tomorrow’s intermediary will not simply be a neutral conduit. It will serve as a guardian of digital authenticity.

Proactive lawmaking in a reactive world

Historically, legislation has trailed behind technology, arriving only after harm has occurred. The 2025 draft IT Rules Amendments flip this pattern. They prepare for disruption rather than simply responding to it. They acknowledge that AI is not merely an instrument of advancement. It can equally become a tool of deception without proper regulation.

Still, the real challenge will be in putting these rules into practice. Policymakers and regulators will need to tackle four major issues:

Developing technical systems for automated identification of unlabelled synthetic content.
Setting up uniform metadata guidelines to ensure consistency across all platforms.
Building enforcement capabilities at scale within India’s regulatory bodies.
Promoting international cooperation, given that synthetic content crosses borders effortlessly.

Toward a transparent digital republic

On a more fundamental level, these amendments reflect a shared goal. They aim not simply to control technology but also to establish truth as a core principle of the digital republic. By treating synthetic deception as a legal violation, India establishes a global example for how democratic nations can legislate integrity into their digital landscape.

India’s commitment to addressing deepfakes through legislation is not merely about enforcement. It is about values. It affirms that the drive for innovation should not compromise trust. In the ongoing battle between genuine creation and manipulation, the law must defend authenticity.

The 2025 draft IT Rules Amendments therefore extend well beyond standard regulation. They express India’s commitment to grounding digital freedom in accountability, and digital advancement in truth. When implemented effectively, these reforms have the potential to turn the internet from a breeding ground for deception into an environment where truth, clearly marked and visible, continues to hold significance.

The writer is an advocate at the Supreme Court of India and Chairman of the International Commission on Cyber Security Law

Disclaimer: Views expressed are personal and do not reflect the official position or policy of FinancialExpress.com. Reproducing this content without permission is prohibited.
