Information technology minister Ravi Shankar Prasad’s letter to Facebook CEO Mark Zuckerberg probably helped refocus the narrative before Facebook India’s chief appeared before the Parliamentary Standing Committee on Information Technology, headed by senior Congress leader Shashi Tharoor. Until then, following The Wall Street Journal’s exposés on Facebook’s India operations, the narrative was about how the firm’s top public-policy executive had ensured that BJP politicians were allowed to remain on the platform despite their posts being flagged as hate speech; the executive had reportedly said that removing them could damage Facebook’s operations in the country.
Prasad’s letter sought to turn this narrative on its head by protesting against Facebook deleting pages just before the 2019 general elections; such actions, he said, “are seemingly a direct outcome of the dominant political beliefs of individuals in your Facebook India team”, and he went on to say that “it is problematic when Facebook employees are on record abusing the Prime Minister and senior Cabinet Ministers of India while still working in Facebook India”. At the parliamentary panel session, BJP members asked the Facebook India head about his association with the Congress’s Kerala unit during the 2011 assembly elections; BJP MPs also alleged that the organisations Facebook used for fact-checking employed people who had worked with the Congress party.
The political one-upmanship apart, there is enough evidence globally that social media firms like Facebook are no longer the innocent ‘platforms’ they once claimed to be. Their defence used to be that, just as anyone can say anything in the public square, anyone can post anything on their platforms, and they had no control over it.
The huge sums of money these firms make from their platforms, it goes without saying, make them responsible for what is posted on them. That is why they have developed sophisticated algorithms for detecting, and then taking down, various types of harmful content such as hate speech or paedophilia. Indeed, they even allow users to report hate speech that their automated systems fail to detect. With these firms increasingly removing speech on their own (recall how Twitter flagged a few of US President Donald Trump’s tweets), it is clear they can take action when they are forced to. It is precisely for this reason that, a few days ago, Facebook told users it could take down any content that it felt could increase regulatory or legal risks for it around the world. And, as the Cambridge Analytica episode showed, Facebook was used to influence US elections as well.
A few obvious conclusions follow, more so given the role of social media in the propagation of fake news. For one, social media must, increasingly, be held legally responsible for what is on its platforms, in much the same way traditional media is. There is, of course, a difference: a Facebook cannot prevent posts from being made, while a newspaper editor can prevent a story from being published; but what matters is the action taken after a post has been made. In the case of encrypted messages, such as those on WhatsApp, the firm cannot even see the messages afterwards; in such cases, it is critical to develop rapid fact-checking methods. It is equally critical to take credible action when posts are proved to be mala fide; this could include handing over details to law-enforcement agencies once courts have ratified their demands. Allowing social media to flourish without any means of checking its ill-effects, such as the manipulation of elections and the spread of falsehoods, is asking for trouble; and it is equally true that, left to its own devices, social media is not going to take tough action.