In a first, Facebook walks users through the guidelines it uses to remove content from its platform

Published: April 25, 2018 3:52:14 PM



Facebook has for the first time publicly disclosed the guidelines it follows to remove content from its platform. The posts, photos, videos, and other content posted on Facebook every second pass through an algorithm designed by the company to filter out rogue and objectionable material. However, there have been occasions when Facebook faced an outcry over content that was mistakenly deleted under its rules on nudity, online extremism, child abuse, hate speech, and graphic violence. Facebook is now explaining how those decisions are made, and is building a system that lets people appeal the wrongful removal of content.

In a blog post, Facebook’s Vice President of Global Policy Management, Monika Bickert, wrote that Facebook is now publishing the internal guidelines used to enforce its standards for content on the platform. Facebook has a robust system for removing content that conspicuously violates the standards against nudity, sexual violence, pornography, child abuse, and hate speech.

Bickert believes that with the guidelines out in public, users will benefit in two ways. “First, the guidelines will help people understand where we draw the line on nuanced issues. Second, providing these details makes it easier for everyone, including experts in different fields, to give us feedback so that we can improve the guidelines – and the decisions we make – over time,” noted Bickert.

Notably, the guidelines Facebook uses to flag and subsequently delete content were first brought to light by The Guardian, which obtained leaked copies last year. Facebook is being more explicit about its standards now, in the wake of the enormous privacy scandal that led Mark Zuckerberg to testify before the US Congress. The guidelines now out in public, however, reveal little beyond how Facebook has been moderating content all along.

The convoluted provisions spell out the scope of broad topics deemed unfit for the platform. For example, Facebook stresses (a lot) in the guidelines that content showing sexual intercourse or human genitalia will not be allowed on the platform. At the same time, it exempts advertisements and other content promoting sexual education or health from removal. What it does not clearly demarcate is the large volume of advertisements selling prostitution and other kinds of pornography. Moreover, while sexual organs and other ‘hidden’ body parts are not allowed in an image, photographs of celebrities or politicians doctored with them are presumably allowed.
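To make that exemption-based logic concrete, here is a minimal sketch in Python. The labels, context tags, and rule are hypothetical illustrations of how a "disallowed unless in an exempt context" policy could be expressed; they are not drawn from Facebook's actual systems.

```python
# Hypothetical sketch of an exemption-based moderation rule, loosely modelled
# on the policy described above. Labels and contexts are illustrative only.
from dataclasses import dataclass

@dataclass
class Post:
    labels: set    # e.g. {"adult_nudity", "sexual_activity"}
    context: set   # e.g. {"sexual_education", "health"}

ADULT_SEXUAL = {"adult_nudity", "sexual_activity"}
EXEMPT_CONTEXT = {"sexual_education", "health"}  # contexts the policy carves out

def should_remove(post: Post) -> bool:
    """Remove adult sexual content unless it appears in an exempt context."""
    violates = bool(post.labels & ADULT_SEXUAL)
    exempt = bool(post.context & EXEMPT_CONTEXT)
    return violates and not exempt

# A health-education post with nudity labels is kept; a bare violation is not.
print(should_remove(Post({"adult_nudity"}, {"health"})))  # False
print(should_remove(Post({"sexual_activity"}, set())))    # True
```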

Coming to hate speech, Facebook classifies it into three tiers, where Tier 1 denotes the most extreme type and Tier 3 the least extreme. Facebook has grouped posts, videos, and photos containing violent speech; crime; attacks on race, caste, creed, ethnicity, gender, or sexual orientation; and more under Tier 1. Tier 3 essentially covers everything that is moderately insinuating towards others. Facebook says it also has provisions to protect people on the basis of immigration status, but at the same time it allows criticism of immigration policies, as well as calls for measures to restrict them. As ambiguous as this gets, Facebook still stops well short of saying explicitly whether posts praising immigration, or those opposing it, fall foul of the standards.
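As a rough illustration of what such a severity tiering could look like in code, here is a short Python sketch. The signal buckets are invented placeholders, not Facebook's real criteria; only the Tier 1 (most extreme) to Tier 3 (least extreme) ordering comes from the description above.

```python
# Hypothetical tiered hate-speech classification: Tier 1 is most extreme,
# Tier 3 least extreme. Signal names are made-up placeholders.
from typing import Optional

TIER_1 = {"violent_speech", "dehumanising_attack", "call_for_violence"}
TIER_2 = {"statement_of_inferiority", "expression_of_contempt"}
TIER_3 = {"call_for_exclusion", "moderate_insinuation"}

def hate_speech_tier(signals: set) -> Optional[int]:
    """Return the most severe matching tier (1 is worst), or None if clean."""
    if signals & TIER_1:
        return 1
    if signals & TIER_2:
        return 2
    if signals & TIER_3:
        return 3
    return None

print(hate_speech_tier({"expression_of_contempt"}))  # 2
print(hate_speech_tier({"benign_comment"}))          # None
```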

Facebook says that the policies are “only as good as the strength and accuracy” of their enforcement, though enforcement on its platform has so far been rather shoddy. “We use a combination of artificial intelligence and reports from people to identify posts, pictures or other content that likely violates our Community Standards,” says Facebook. Facebook has more than 7,500 people on its Community Operations team reviewing content “24/7” in over 40 languages.
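The quoted description amounts to combining a machine-learning signal with user reports before a post reaches a human reviewer. The sketch below shows one simple way such a gate could work; the thresholds and parameter names are assumptions for illustration, not details Facebook has published.

```python
# Minimal sketch: route a post to human review if either an ML classifier is
# confident it violates the standards or enough people have reported it.
# Thresholds and names are illustrative assumptions.
def needs_human_review(model_score: float, user_reports: int,
                       score_threshold: float = 0.8,
                       report_threshold: int = 3) -> bool:
    """Queue a post for human review based on AI score or user reports."""
    return model_score >= score_threshold or user_reports >= report_threshold

print(needs_human_review(0.92, 0))  # True  (flagged by the model)
print(needs_human_review(0.40, 5))  # True  (flagged by user reports)
print(needs_human_review(0.40, 1))  # False (left alone)
```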

However, despite having a large team to weed out offending content from the platform, Facebook believes it “needs to do more”. Facebook is building a new process to let people appeal the removal of content taken down for “nudity / sexual activity, hate speech or graphic violence”. Facebook has also explained how this works:

If any content is removed from Facebook for violating the community standards, the user will be notified of the action, along with an option to “request additional review”. The request will be reviewed by Facebook’s team within 24 hours. If Facebook finds that the content was removed by mistake, it will notify the user and restore the deleted post, photo, or video.
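The appeal flow described above is essentially a small state machine: removal notice, review request, then either the removal is upheld or the content is restored. Below is a hedged Python sketch of that flow; the state names and function are illustrative, not Facebook's actual implementation.

```python
# Sketch of the described appeal flow: removal notice -> "request additional
# review" -> human re-review (within 24 hours) -> restore if it was a mistake.
# State names are illustrative assumptions.
from enum import Enum, auto

class AppealState(Enum):
    REMOVED_NOTIFIED = auto()   # user told their post was removed
    REVIEW_REQUESTED = auto()   # user tapped "request additional review"
    UPHELD = auto()             # reviewer confirms the removal
    RESTORED = auto()           # reviewer reverses it; post comes back

def resolve_appeal(reviewer_found_mistake: bool) -> AppealState:
    """Outcome of the human re-review of an appealed removal."""
    return AppealState.RESTORED if reviewer_found_mistake else AppealState.UPHELD

print(resolve_appeal(True))   # AppealState.RESTORED
print(resolve_appeal(False))  # AppealState.UPHELD
```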

