In the wake of a contentious election, criticism of the micro-blogging site has grown over its failure to curb harassment and abusive language. Since Donald Trump’s win, reports of racist attacks and hate speech on Twitter, as well as on other social media platforms, have flooded newsfeeds.
The tech company was first pushed to reassess its policies when trolling aimed at Saturday Night Live comedian Leslie Jones grew so offensive that she quit Twitter and was subsequently hacked.
“The amount of abuse, bullying, and harassment we’ve seen across the Internet has risen sharply over the past few years,” says the San Francisco-based tech company in its Tuesday statement. The statement goes on to describe abusive conduct as a threat to “human dignity, which we should all stand together to protect.”
Admitting that Twitter’s public, real-time nature has posed a challenge when it comes to policing abuse, the company says it took a step back to find a new approach and landed on three areas: controls, reporting, and enforcement.
Under the new guidelines, Twitter is expanding its “mute” feature to apply to Notifications.
“We’re enabling you to mute keywords, phrases, and even entire conversations you don’t want to see notifications about, rolling out to everyone in the coming days,” says the company.
The next change concerns Twitter’s hateful conduct policy, which prohibits targeting other users based on their race, ethnicity, or gender identity, among other attributes. Twitter said it is now providing users with a more direct way to report this type of content in real time, though it did not specify how in the statement.
As for the enforcement category, the company said it retrained its support team and improved internal tools.
“We don’t expect these announcements to suddenly remove abusive conduct from Twitter,” the company concludes. “No single action by us would do that. Instead we commit to rapidly improving Twitter based on everything we observe and learn.”