Twitter's plan needs nuance; others should follow suit.
Beginning in March, Twitter will label, and in some cases remove, doctored or manipulated videos, photographs, and other media shared on the platform. The aim is to fight fake news and misinformation that can have dangerous real-world consequences. The economic costs of fake news and misinformation, as per the findings of a study conducted by the University of Baltimore and CHEQ (a cybersecurity company), are a staggering $78 billion a year, while the global spend on creating fake news with a bearing on politics stands at $400 million.
This is of particular significance as misinformation colours electoral decisions, which, in turn, affect policies. Twitter's policy is forward-looking given that it targets deep-fakes—media generated or edited using AI or advanced software to distort a person's appearance and speech while making it all look authentic. The social media platform will also go after content that has been substantially edited with the aim of spreading misinformation. It will decide on a case-by-case basis whether to remove content that may "impact public safety or cause serious harm", or to let the content remain on the platform with a label letting users know that it has been doctored.
The policy, of course, will need to be nuanced. Memes, satire, and parodies are part of the online conversation on politics, and if Twitter's policy means that such content, too, will face a ban, it will have created a problem just as bad as the original one. Also, relying on AI is not as foolproof as it sounds, and it is critical to have human editors supervise the process; YouTube, for instance, has 10,000 people engaged in spotting material that needs to be removed. Such moderation, of course, is not possible on messaging platforms that are encrypted—such as WhatsApp or Telegram—and, while governments may favour removing encryption, there are privacy issues involved in doing so.
Perhaps, for now, WhatsApp and similar platforms can give users the option of forwarding messages for quick fact-checking. Encouraging more fact-checking sites and bodies—both government-owned and private—that also regularly disseminate information on fake messages is a good idea. While India's intermediary guidelines say that messaging apps need to help trace the origin of offending messages, that needs more discussion, particularly since it is not clear this will not be used to target critics of the government; it would help if the government created an independent body or regulator that oversees requests to messaging apps, to ensure there is no unfair targeting. Calling out fake-news peddlers is critical; an ongoing study posted on Reddit has already flagged 18,000-plus Twitter accounts that spread fake news benefiting the ruling party, against nearly 150 doing so for the opposition Congress.