The Ministry of Electronics and Information Technology (MeitY) on Wednesday laid out a clear framework to identify, label, and regulate deepfakes and other synthetically generated information. It has done so by proposing a fresh set of amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.

The proposal marks the first time that synthetically generated information has been given a legal definition under IT law, signalling a major shift in how digital content will be treated when created or modified by artificial intelligence.

The government’s reasoning is straightforward: deepfake audio, videos, and images created using generative AI have rapidly evolved into powerful tools of deception.

“The draft seeks to help users distinguish between synthetic and authentic content while holding social media platforms accountable,” Electronics and IT Minister Ashwini Vaishnaw said while speaking to the media. “People using prominent persons’ faces and creating deepfakes is a growing menace. The step that we have taken ensures that the user gets to know whether something is synthetic or authentic. That distinction will be made through mandatory labelling and metadata,” he added.

The proposals outline a clear operational system. Any intermediary, such as a social media platform or an app, that enables users to create, modify, or distribute synthetically generated content will now be obligated to ensure that such content carries a visible label or embedded metadata identifying it as artificial. This label, under the newly proposed Rule 3(1), must be permanent, non-removable, and prominently displayed. For visual content, it should cover at least 10% of the screen surface area; for audio, it must play audibly during the first 10% of the clip’s duration. In other words, users scrolling through a video feed or listening to a clip will immediately know whether what they are consuming is authentic or synthetically generated.

For platforms that merely host user-generated content, such as Facebook, Instagram, YouTube, or X, the obligations go even further. Classified under the IT Rules as significant social media intermediaries (SSMIs), those with more than 5 million registered users in the country, these platforms will have to obtain a declaration from users at the time of upload asking whether the content is synthetically generated. They will also be required to deploy reasonable and proportionate technical tools to verify these claims. If the content turns out to be AI-generated, the platform must ensure that it is clearly labelled or accompanied by a visible notice.

Platforms that remove or restrict access to harmful synthetic material based on user grievances or internal detection will continue to enjoy safe harbour protection under Section 79(2) of the IT Act. This protection ensures that platforms are not unfairly penalised for acting responsibly to curb harmful or misleading content.

To avoid overreach, MeitY has clarified that these obligations will apply only to content that is publicly shared or published on social media platforms, not to private or unpublished material.

The rules also make it clear that the definition of information in the IT Rules now explicitly includes synthetically generated data, so AI-created misinformation, defamatory content, or fraudulent impersonations will be treated no differently from their real-world counterparts under the law.

The ministry has opened the draft for public consultation, inviting feedback from stakeholders, industry players, and citizens until November 6. 

Only senior officers can now order content takedowns

The Ministry of Electronics and Information Technology (MeitY) has amended the Information Technology Rules, 2021, to bring more accountability and structure to the process of ordering the removal of online content. From November 15, only senior government officers, those at the level of joint secretary or above or, in the case of law enforcement agencies, a deputy inspector general (DIG) of police or higher, will be empowered to issue takedown notices to social media platforms under Section 79(3)(b) of the IT Act.

This is a significant shift from the current practice where relatively junior officials, including sub-inspectors and assistant sub-inspectors in state police departments, could direct platforms to remove content. The new rule, inserted through amendments to Rule 3(1)(d) of the IT Rules, 2021, explicitly limits that power to senior ranks. In cases where no joint secretary is posted, a director-level officer or an equivalent can act as the authorised signatory.

MeitY said the intent is to ensure that decisions to take down online content are made with a higher degree of responsibility and scrutiny. “We have raised the level of accountability in the government,” Electronics and IT Minister Ashwini Vaishnaw said, adding that the reform balances regulatory authority with due process.

The amendment also formalises procedural safeguards. Every takedown notice must now spell out the precise legal provision invoked, describe the nature of the alleged unlawful act, and identify the exact URL or online location of the content in question. Non-compliance with such a notice could still cost an intermediary its safe harbour protection under Section 79, which grants platforms immunity from liability for user-generated content.

Crucially, MeitY has also introduced a periodic review mechanism. Orders issued under this provision will be reviewed by a committee chaired by an officer not below the rank of secretary, such as a state’s IT or home secretary, to assess whether they were necessary and proportionate.

The move comes against the backdrop of a legal challenge from X, which had moved the Karnataka High Court questioning the government’s sweeping powers to demand content removal. The court, however, ruled against the company.