The Internet and Mobile Association of India (IAMAI), in its response to the government’s draft amendment on artificial intelligence (AI) labelling, has argued that the new obligations around “synthetically generated information” (SGI) risk over-regulation and duplication of existing laws.
In its submission to the Ministry of Electronics and Information Technology (MeitY), IAMAI said the proposed definition of SGI under Rule 2(1) is broad enough to cover routine, harmless digital edits. The draft defines SGI as any information created or altered using a computer or algorithm in a way that appears real.
IAMAI warned that this could “unintentionally sweep in benign content and everyday user activity”, creating legal uncertainty for platforms. Put simply, the current definition is so wide that even normal, good-faith edits like fixing a photo’s brightness, cleaning up audio, using auto-correct, applying filters or making routine visual tweaks could be treated as ‘synthetic’. Everyday content people create or modify on their phones and laptops may get wrongly clubbed with deepfakes or harmful manipulated media, IAMAI said.
The body also said that adding a separate SGI clause could duplicate obligations and lead to inconsistent enforcement across platforms. “Introducing Rule 2(1A) to ‘clarify’ that references to ‘information’ include SGI in contexts of unlawful acts risks redundancy and uncertainty. The IT Rules 2021 already apply to ‘information’ agnostic to how it was created. Adding a separate cross-cutting clause for SGI could be read to expand or duplicate obligations, complicate interpretation of due diligence standards, and increase the risk of inconsistent enforcement across intermediaries,” it said.
The industry body has also opposed the draft’s mandatory labelling rules, which require watermarks covering at least 10% of an image and audio disclosures in the first 10% of a clip. These measures, it said, are technologically immature, can harm user experience and are easy to bypass. IAMAI argued that India should wait for internationally harmonised provenance standards rather than mandate watermarking prematurely.
Another critical concern is the requirement for significant social media intermediaries (SSMIs) to verify user declarations before SGI is uploaded. According to IAMAI, this would force platforms to pre-screen every piece of content, contradicting the Supreme Court's (SC) judgment in Shreya Singhal v. Union of India, the landmark case in which the SC struck down Section 66A of the IT Act as unconstitutional for violating the right to freedom of speech.
Additionally, the association said that the draft blurs the line between intermediaries and platforms that generate content themselves, risking fresh liabilities for companies using AI tools internally. Since impersonation, fraud and harmful deepfakes are already addressed under existing IT provisions, IAMAI argued that creating parallel SGI rules may complicate compliance.
The draft amendments, released by MeitY last month, mark the first time that “synthetically generated information” has been given a legal definition under India’s IT law, a major shift in how digital content created or modified by AI will be regulated. The government’s rationale is that deepfake audio, videos and images created using generative AI have rapidly evolved into powerful tools of deception.
Under the proposed rules, platforms are required to label synthetic content, embed metadata, and seek user declarations.
In place of these requirements, IAMAI urged MeitY to take a “technology-neutral, standards-led approach”. It recommended removing the SGI definition and mandatory labelling requirements from the current draft until global standards mature.
