Prime Minister Theresa May’s political fortunes may be waning in Britain, but her push to make internet companies police their users’ speech is alive and well. In the aftermath of the recent London attacks, May called platforms like Google and Facebook breeding grounds for terrorism. She has demanded that they build tools to identify and remove extremist content. Leaders of the Group of 7 countries recently suggested the same thing. Germany wants to fine platforms up to 50 million euros if they don’t quickly take down illegal content. And a European Union draft law would make YouTube and other video hosts responsible for ensuring that users never share violent speech.
The fears and frustrations behind these proposals are understandable. But making private companies curtail user expression in important public forums—which is what platforms like Twitter and Facebook have become—is dangerous. The proposed laws would harm free expression and information access for journalists, political dissidents and ordinary users. Policy makers should be candid about these consequences and not pretend that Silicon Valley has silver-bullet technology that can purge the internet of extremist content without taking down important legal speech with it.
Platforms in Europe currently operate notice-and-takedown systems for content that violates the law. Most also prohibit other legal but unwelcome material, like pornography and bullying, under voluntary community guidelines. Sometimes platforms remove too little. More often, research suggests, they remove too much—silencing contested speech rather than risking liability. Accusers exploit this predictable behaviour to target expression they don’t like—as the Ecuadorean government has reportedly done with political criticism, the Church of Scientology with religious disputes and disgraced researchers with scholarship debunking their work. Germany’s proposed law increases incentives to err on the side of removal: Any platform that leaves criminal content up for more than 24 hours after being notified about it risks fines as large as 50 million euros.
European politicians tout the proposed laws as curbs on the power of big American internet companies. But the reality is just the opposite. These laws give private companies a role—deciding what information the public can see and share—previously held by national courts and legislators. That is a meaningful loss of national sovereignty and democratic control.

Moving this responsibility from state to private actors also eliminates key legal protections for internet users. Private-platform owners are not constrained by the First Amendment or human rights law the way the police or courts are. Users most likely have no remedy if companies are heavy-handed or sloppy in erasing speech. Governments that outsource speech control to private companies can effectively achieve censorship by proxy.
Proposed laws making platforms go beyond notice and takedown to proactively police users’ speech would be even worse than Germany’s draconian takedown proposals. About 300 hours of video are uploaded to YouTube every minute, so reviewing it all is not humanly possible. Courts including the Court of Justice of the European Union and the European Court of Human Rights have recognised that users’ speech and privacy rights will suffer if platforms must vet every word they post. And studies suggest that ordinary internet users self-censor when they think they are being surveilled. Researchers found journalists afraid to write about terrorism, Wikipedia users reluctant to learn about Al Qaeda and Google users avoiding searches for sensitive terms in the wake of the Snowden revelations.
Some politicians say the solution is to build filters—software that automatically identifies and suppresses illegal material. But no responsible technologist believes that filters can tell what speech is legal. Skilled lawyers and judges struggle to make that kind of call. What real-world filters can do, at best, is find duplicates of particular text, pictures or videos—but only after human judgement has determined that they are illegal. Filters that can find child sexual abuse images work relatively well because those images are illegal in every instance.
But violent and extremist material is different. Almost any such image or video is legal in some context. Filters can’t tell the difference between footage used for terrorist recruitment and the same footage used for journalism, political advocacy or human rights efforts. When filters fail to make those distinctions, they will take down information and discussion on topics of vital public importance. Risk-averse companies erring on the side of over-removal for this kind of speech will disproportionately silence Arabic speakers and Islamic religious material.
As a lawyer with long experience handling takedowns from Google web searches, I believe that there are responsible ways to remove illegal content from platforms. A good start is to have courts decide what violates the law—not machines and not company employees operating under the threat of huge fines. Accused speakers should have opportunities to defend their speech, and the public should be able to find out when content disappears from the internet.
If politicians think that eroding online expression rights will make us safer, they should explain how. For all the rhetoric, we know very little about whether curbing online speech prevents real-world violence. What little research we have suggests that driving violent or hateful material into dark corners of the web may make matters worse.
Outraged demands for “platform responsibility” are a muscular-sounding response to terrorism that shifts public attention away from governments’ own duties. But we don’t want an internet where private platforms police every word at the behest of the state. Such power over public discourse would be Orwellian in the hands of any government, be it May’s, Donald Trump’s or Vladimir Putin’s.
The writer is director of intermediary liability at Stanford Law School’s Center for Internet and Society. (NYT)