New AI law to be modelled on IT Act

India’s Ministry of Electronics and Information Technology (MeitY) plans to introduce a comprehensive Artificial Intelligence (AI) Act to formalize the regulation of deepfakes and synthetically generated content.

India to Get Full-Fledged AI Law Following Deepfake Rules

After formalising rules to identify, label, and regulate deepfakes and other synthetically generated content, the Ministry of Electronics and Information Technology (MeitY) will soon bring out comprehensive legislation on Artificial Intelligence.

Official sources told FE that once public consultation on the draft rules closes on November 6, the government will finalise them. However, to avoid possible legal challenges, a full-fledged AI law, on the lines of the Information Technology (IT) Act, 2000, will follow.

This means a Bill will be introduced in Parliament to address issues related to deepfakes and synthetically generated content.

Currently, the proposed rules are framed under the IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, which derive their powers from the IT Act. Legal experts point out that since the Act does not specifically deal with AI, any such rules could be challenged in court.

“A law to curb deepfakes or any aspect of AI will be needed, as the rules currently proposed by MeitY can be challenged because their scope goes beyond the primary legislation,” cyber law expert Pavan Duggal told FE. “Rules are secondary in nature and cannot exceed the ambit of the parent law,” he added.

Officials said once an AI Act is in place, it can be expanded through additional rules as technology evolves, just as the IT Act framework allows.

The new provisions mark a major shift in how digital content created or modified by AI will be regulated. The government’s concern stems from the rapid rise of deepfake videos, images, and audio, which are increasingly being used for deception and misinformation.

Under the proposed Rule 3(1), any intermediary that allows users to create, modify, or share synthetically generated content must ensure such material carries a visible label or embedded metadata clearly identifying it as artificial. The label must be permanent, non-removable, and prominently displayed, covering at least 10% of the screen for visual content, or audibly stated during the first 10% of an audio clip.

For major social media platforms, classified as significant social media intermediaries (SSMIs) with over 5 million users, additional obligations apply. They will have to seek user declarations at the time of upload stating whether the content is AI-generated, and deploy proportionate technical tools to verify such claims. If found to be synthetic, the content must be clearly labelled or carry a visible notice.

Platforms that remove or restrict access to harmful synthetic content, either on user complaints or through internal mechanisms, will continue to enjoy safe harbour protection under Section 79(2) of the IT Act, shielding them from liability for user-generated material.

To prevent overreach, MeitY has clarified that the new obligations will apply only to publicly shared content, not to private or unpublished material.

The definition of information under the IT Rules has also been expanded to include synthetically generated data, ensuring that AI-created misinformation, defamatory content, or impersonations are treated on par with real-world equivalents under the law.

This article was first uploaded on November 2, 2025, at 6:47 pm.
