Even as the government’s proposal to mandate labelling of AI-generated content has drawn broad approval for its intent, analysts are not sure whether such a system can truly be foolproof. Industry analysts and legal experts Fe spoke to said the idea of signalling what’s real and what’s synthetic is timely, but were uncertain whether current technology, or the law, can deliver what the draft rules expect.

The amendments to the IT Rules 2021, issued by the Ministry of Electronics and IT on Wednesday, require social media platforms to obtain user declarations on whether uploads are synthetically generated, deploy automated verification tools, and visibly label such content before publication. The labels are meant to cover at least 10% of the visual frame or the first 10% of audio duration, so that viewers know when they are watching or hearing something created by AI.

“The idea of automated verification sounds reassuring, but current AI-detection systems simply aren’t reliable enough,” Sindhuja Kashyap, partner at King Stubb & Kasiva, told Fe. Detection tools often hover around 60–80% accuracy, meaning even the biggest platforms could fail compliance tests. Smaller intermediaries, she added, might find the requirement nearly impossible to meet.

There’s also the question of liability. The draft places the burden squarely on intermediaries, who could lose their safe harbour protections under the IT Act if they fail to detect or label synthetic content. With billions of uploads every month, errors are inevitable, experts said.

Raja Lahiri, partner at Grant Thornton Bharat, said there is a rising trend of AI deepfake attacks that clone a person’s voice or appearance, and with about 1 billion Internet users in India, this is concerning and could have widespread ramifications. The proposed changes, he said, are therefore welcome and much needed for India, and will require proper due diligence by social media companies on their content.

“Social media firms will have to initiate appropriate processes and controls to monitor deepfakes and ensure better protection to Indian citizens. In my view, the proposed amendments to the IT Rules provide trust and safety for Internet users in India, and help in navigating the risks of deepfakes and AI-generated content which could pose risks to Indian citizens,” Lahiri said.

Analysts have suggested that instead of a blanket mandate, a risk-based approach, focusing on high-impact synthetic content, may be more realistic.