New laws on ‘intermediaries’ are open-ended and worrying
(Image: US Capitol violence; Reuters)
Given the kind of powers Big Tech firms like Twitter have in deciding whose accounts will be suspended or which tweets will be flagged or deleted, or the role of a Parler in the Capitol siege in the US, it is hardly surprising that, after years of discussion, the government has finally come out with its Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules that seek to address some of these issues. Indeed, the refusal of messaging platforms such as WhatsApp, or Blackberry in the past, to help trace certain messages – like, say, those between terrorists or those trying to foment communal tension – is also something the new rules try to address. In doing so, however, the government appears to have given itself too many powers; and given how open-ended the definitions of proscribed content are – ranging from being defamatory to threatening ‘public order’ or violating ‘decency’ or ‘morality’ – the chances of abuse cannot be ruled out. Few would, for instance, argue that sedition is not a serious crime, but, in the past, people have been arrested under this law merely for lampooning politicians.
Asking an ‘intermediary’ to remove content based on a court order is one thing, since a judicial process has been followed, but even an order from “the appropriate Government or its agency” is considered good enough under the new rules. Asking intermediaries to appoint compliance and grievance redressal officers is a good move, and the government has done well to say that it does not want firms to disclose the content of messages but only wants them to help identify where a message originated; though, even if one assumes it is technically possible to trace a message without opening its contents, this is a provision that can be abused if not used carefully. Asking social media intermediaries, like Facebook or Twitter, to ‘proactively identify’ and block the reposting of content that has been banned before – like, say, the video of the Christchurch mosque shooting in 2019 – is probably a good idea.
In the case of digital media too, asking for a three-tier grievance redressal process is a good measure. If the first level, that of a complaint to the digital media firm, does not result in a satisfactory resolution, the matter is to be bumped up to a self-regulating body headed by a retired judge. It is after this that the problem starts: the third level of redressal is an inter-departmental government committee and, since digital publishers have been brought under Section 69 of the IT Act, the government can order the removal of content even before any judicial process has declared it fake or otherwise damaging.
The government certainly needs to be able to take action where national security is concerned but, in most cases, it already has enough powers to do so. Arming it with new powers for what should only be emergency situations is something that needs to be done very carefully, with enough safeguards to prevent abuse.