The latest draft amendments proposed by the Ministry of Electronics and Information Technology (MeitY) seek to do two things at once: give legal force to government advisories issued to social media platforms, and expand the regulatory net to user-generated content by effectively recasting users as akin to publishers and news platforms.
Both moves raise concerns that go beyond compliance, pointing to an overreach of both legislative intent and legal design. At the heart of the proposal is an attempt to make advisories, clarifications, and standard operating procedures binding by linking them to due diligence obligations under the Information Technology Act.
This creates a pathway where failure to comply with an advisory could result in the loss of safe harbour protection. In effect, guidance that has traditionally been non-binding would be elevated into an enforceable obligation.
Advisories, by definition, are not law. The parent statute empowers the government to frame rules within defined limits; it does not envisage a regime where executive communications can acquire binding force without legislative backing.
If such a shift is considered necessary, it ought to be anchored in an Act of Parliament rather than introduced through subordinate legislation that expands the scope of the parent law. The stated objective appears to be speed, particularly in responding to deepfakes, misinformation, and harmful content. That concern is legitimate.
But the means chosen remain questionable. Faster response mechanisms, clearer statutory rules, and improved institutional coordination can achieve similar outcomes without blurring the distinction between law and guidance.
Creating ambiguity
Embedding advisories within the legal framework risks creating ambiguity around their scope, frequency, and enforceability, while concentrating discretion within the executive.
Equally significant is the definitional expansion embedded in the draft. By bringing user-generated news and current affairs content within the regulatory ambit, the framework moves towards treating individuals—bloggers, video creators, or even professionals posting on platforms—as a class closer to publishers.
It widens the field of regulation to include ordinary users engaging in commentary, analysis, or dissemination of information online. The implications for speech are difficult to ignore. Even without explicit censorship provisions, the framework creates conditions for indirect restriction.
Platforms, faced with the prospect of liability, are likely to over-comply, leading to quicker and broader takedowns. For users, this translates into a narrower space for expression, particularly on issues that may be politically or socially sensitive.
Recent precedent
There is also recent precedent to consider. The withdrawn Broadcasting Services Bill had proposed to bring online creators within a licensing or pre-certification framework, drawing criticism for potential censorship. The present draft, while different in form, appears to revive elements of that approach through another route—by tightening intermediary obligations and indirectly shaping what users can publish.
The route may have changed, but the underlying impulse appears similar. This is not to suggest that regulation is unwarranted. The existing framework under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, already enables content takedown, grievance redress, and oversight in cases affecting sovereignty, security, and public order.
Additional layers that stretch beyond the statute risk creating legal vulnerability and regulatory uncertainty. The draft has been opened for public consultation until April 14, providing an opportunity for reconsideration. The task is to address emerging digital risks without diluting legal boundaries. Conflating advisories with law and extending regulatory reach to ordinary users does precisely that, and calls for careful course correction.
