The political blame-game over the Wall Street Journal (WSJ) article, which claims that a top Facebook India executive opposed action on hate-mongering posts by members of the ruling BJP, misses the point. The opposition parties that have used the report to accuse the BJP of subverting democracy through social media should remember that their own supporters and leaders have hardly been blameless on this count—indeed, former Congress chief Rahul Gandhi had to recant before the Supreme Court an attribution made as part of a smear campaign targeting the prime minister, but only long after social media had been used to amplify his allegations. And the ruling party shouldn’t be claiming, as former I&B MoS Rajyavardhan Singh Rathore does in an opinion piece in The Indian Express, that Facebook is a Left-Congress-leaning platform—indeed, last year, of the 700 Facebook pages taken down by the company in India for “coordinated inauthentic behaviour”, 687 were Congress-linked while 15 were linked to Silver Touch, the IT company that developed the NaMo app. However, the central issue in the present matter is neither group’s contentions, but the social media giant’s conduct.
While Facebook says it prohibits the use of its platform for hate speech and the promotion of violence “without regard to anyone’s political position or party affiliation”, a top executive attempting to stall action against a political leader’s hate-filled posts because acting against a member of the ruling party “will damage the company’s business prospects in India” is hardly in keeping with that claim. Indeed, against the backdrop of the phenomenon known as “WhatsApp university”—the use of the messaging service to spread misinformation, content that could incite violence, etc—social media platforms such as Facebook need to overtly signal that they are above partisanship. However, little in Facebook’s history of policing political/polarising content inspires such faith.
From US lawmakers summoning it, along with other social media companies, over alleged attempts to silence conservative voices, to the hearings in the UK and Singapore over the Cambridge Analytica episode, Facebook doesn’t seem to have gotten this right. What’s worse, the company appears to be aware of the problem of harmful content on its platform; in 2018, it internally examined how the platform was facilitating polarisation but, as per the WSJ, did little to curb this, with policy chief Joel Kaplan arguing at the time that any attempt to guide civil conversation on the platform would be “paternalistic”. Some big advertisers recently pulled out of the platform over its handling of hateful content amid race tensions in the US. Yet, Facebook still seems unable to strike a balance between its responsibility as an amplifier of views and its desire to stay in governments’ good books. So, even as it claims to champion free speech, it allows “correction notices” to be appended, in Singapore, to news the government deems false, and has even agreed to restrict access to dissident political content in Vietnam.
There can be no relaxing of standards on hateful content based on political affiliation. Congress parliamentarian Shashi Tharoor’s call for a joint parliamentary committee hearing on the present matter is perhaps one way to take a deep dive into the problem and look for a consensus solution, however hard that might be to devise and implement.