The Supreme Court (SC) ruling that the 2009 amendment to the Information Technology (IT) Act, which provides a safe harbour to digital intermediaries, won’t apply to a pre-amendment defamation case against Google opens a regulatory Pandora’s box. Visaka Industries, an asbestos sheet manufacturer, had filed a complaint against Google in 2009, before the IT Act was amended in October that year. The amendment, among other things, protects internet intermediaries, under Section 79, from liability for offences such as defamatory content posted by third parties on their platforms. Visaka had issued legal notices to Google asking it to take down a post on Blogspot (Google’s blogging service) by the Ban Asbestos Network India that targeted it, claiming that Google had exponentially amplified the reach of the defamatory statements without taking due care. Google moved the Andhra Pradesh High Court but got no relief. Now, the SC’s ruling means that the safe harbour provisions in the IT Act won’t be available in cases filed before the October 2009 amendment. The apex court seems to have given scant regard to the incongruence that results from an intermediary being simultaneously held liable and not liable for user posts deemed defamatory, depending on the date of the post. Such regulatory schizophrenia militates against established principles of justice delivery. And, just as important, it erodes business confidence in the law of the land.
However, it is not just the SC ruling that is fuelling uncertainty. The government itself has proposed a set of amendments that strip the protection intermediaries enjoy. In December 2018, it had invited public comments on the draft Intermediaries Guidelines (Amendment) Rules, which called for, among other things, intermediaries to “deploy technology based automated tools or appropriate mechanisms, with appropriate controls, for proactively identifying and removing or disabling public access to unlawful information or content”. While what constitutes unlawful information or content is defined to an extent in the proposed amendment, qualifiers such as “grossly harmful, harassing, blasphemous, defamatory, obscene” lend themselves to wide interpretation. Besides, proactive identification of unlawful content will mean that an intermediary has to screen content in a manner that could run afoul of the law on privacy. While companies have so far acted on the basis of user reports, proactive identification could also be held to mean that intermediaries that assure users end-to-end encryption will have to find a way to decrypt posts. Apart from the ramifications this has for privacy, it will usher in a regime of censorship. The draft rules are also quite in line with the draft personal data protection law in requiring intermediaries to allow access to user information and provide assistance to the government in investigations. As this newspaper has pointed out, such sweeping powers for the government, without the right checks, would amount to a gross violation of privacy. The SC order on intermediary liability, read together with the draft rules’ provisions on proactive screening, respective privacy policies and sharing of data with the government, effectively leaves Google and other intermediaries caught between the devil and the deep sea.