The government’s amended Information Technology (IT) Rules may face legal and enforcement challenges despite sharply tighter takedown timelines for synthetically generated and unlawful content, according to legal experts, who said the absence of fresh penal provisions could limit immediate remedies for users.
The amendments, which were notified on Tuesday by the Ministry of Electronics and Information Technology and will be effective from February 20, primarily tighten compliance deadlines for intermediaries. However, legal experts pointed out that the only substantive consequence for non-compliance remains the potential loss of safe harbour protection under Section 79 of the IT Act.
“In practical terms, there is no standalone penal provision introduced in these amendments. The sanction is loss of safe harbour, and in 25 years no major platform has actually lost that protection,” cyber law expert Pavan Duggal told FE. “If there is no immediate penal trigger, users may not get an effective and time-bound remedy,” he added.
Sanction Gap
Under the new rules, platforms must remove non-consensual intimate imagery and deepfake content within two hours of receiving a complaint, down from the earlier 24-hour window. Separately, other unlawful content must be taken down within three hours of receiving a government or court order, compared with the previous 36-hour limit.
Legal experts said the key test will be implementation. “What happens if a platform does not comply within three hours and there is correspondence or clarification sought? The rules prescribe timelines, but enforcement can involve back-and-forth,” a lawyer said. Experts cited a recent instance involving the AI chatbot Grok, where MeitY had to send a notice and later followed up, as an example of how compliance processes can extend beyond initial directions.
Concurring with this view, Rakesh Maheshwari, a former MeitY official, said platforms may continue with their delaying tactics, but by setting strong benchmarks the government has shown intent, which should, over time, ensure that action is taken in a time-bound manner.
From Labels to Metadata
The amendments also introduce a labelling framework for AI-generated content. Intermediaries must ensure that synthetically generated audio, visual or video content that appears authentic is clearly labelled. The government has, however, dropped the earlier draft proposal that mandated fixed-size watermarks covering at least 10% of visual display area or an audible marker in the first 10% of audio duration.
Instead, the rules require platforms, where technically feasible, to embed permanent metadata or unique identifiers into such content to enable traceability. Legal experts said the obligation is broad but lacks technical clarity. “The rules mandate embedding metadata, but they do not spell out standards for how such metadata will be generated, verified or audited. That gap could create compliance uncertainty,” one expert said.
Beyond enforcement mechanics, Duggal raised concerns about legislative scope. The amendments are framed under the IT Act, a statute enacted well before the rise of generative AI. “Synthetic content and AI-related harms are distinct regulatory questions. Ideally, there should be a comprehensive law on AI regulation, which should be passed by Parliament, rather than through secondary rules under an older law as is the case at present,” he said.
“If platforms believe the delegated legislation exceeds the parent statute, they can challenge it in court when enforceability arises,” Duggal said.
According to Maheshwari, there is certainly scope for the Rules to be challenged, but the government always has room to bring in primary legislation if the need is felt.
