The world’s largest digital advertising companies reined in their automated money-making machines after the systems were shown to spit out ads based on racist and other offensive information. Facebook Inc. shut off a key self-service ad tool, while Google stopped its main Search ad system from automatically suggesting offensive phrases for targeting.

The moves are the latest sign of rising scrutiny of the largest U.S. internet companies and how their software-driven services and ad businesses are influencing society. The companies have thrived on their ability to offer targeted ads on a massive scale across huge audiences without much human intervention. This week, several news organizations showed they could buy ads based on racist and anti-Semitic terms or categories. The biggest advertisers are unlikely to run marketing campaigns like this, but the findings show how these systems are open to abuse and may require more hands-on monitoring.

“These tools are so easy to use that, without trying very hard, it’s relatively easy to expose the downsides of automated ad sales,” said Brian Wieser, a Pivotal Research Group analyst and critic of Facebook and Google.
Facebook said advertisers will no longer be able to target people by how they describe their education or employer after finding that some were filling in those fields with offensive content. The social networking company will remove targeting by self-reported education, field of study, job title and employment fields in user profiles until it can fix the problem in its self-service advertising system. The decision came after investigative news site ProPublica found advertisers could target users who express interest in anti-Semitic categories like “Jew haters.”
“We are removing these self-reported targeting fields until we have the right processes in place to help prevent this issue,” the company said. The system had been automatically populating interest categories based on what community members post about themselves. “We prohibit advertisers from discriminating against people based on religion and other attributes,” the company said. “However, there are times where content is surfaced on our platform that violates our standards. We know we have more work to do.”
Facebook software creates targeting categories for advertisers automatically, and the company adjusts them only after people notice problems. Facebook has run into similar issues with this type of reactive enforcement before, both in its ad business and its consumer-facing services. Its live video service has shown murders and suicides that had enough time to go viral before the company noticed and took them down. Congress is investigating how Facebook’s ad systems were used, likely by Russia-based entities, to influence the 2016 U.S. presidential election.
Google’s AdWords system, one of the most profitable businesses ever created on the internet, was found wanting in a similar way. It runs ads based on phrases, or keywords, that people type into the company’s search engine. This is very useful for companies selling shampoo or clothes, but a BuzzFeed report on Friday highlighted how it can work with extremist terms, too. BuzzFeed showed how marketers running Search ads against offensive search terms like “black people destroy everything” are automatically fed other racist suggestions. Alphabet Inc.’s Google blocked several of the ads from running, but not all.
“In this instance, ads didn’t run against the vast majority of these keywords, but we didn’t catch all these offensive suggestions. That’s not good enough and we’re not making excuses,” Sridhar Ramaswamy, Google’s ads chief, said in a statement. “We’ve already turned off these suggestions, and any ads that made it through, and will work harder to stop this from happening again.”
Earlier this year, the company was battered by an advertising boycott of its YouTube online video service. Marketers were concerned about ads appearing next to offensive videos. With YouTube, ads can run on a wide, unpredictable range of videos. With Search, advertisers have tighter control over which keywords they buy ads against.
Flaws in Twitter Inc.’s automated ad system were also exposed on Friday. The social media company’s platform tells marketers it has millions of users interested in terms like “wetback,” “Nazi” and the N word, The Daily Beast reported. The publication ran ads targeting users who the system said were likely to respond to the terms, and Twitter’s software didn’t require the campaigns to be approved before they ran, the news site said.
Twitter said the terms used in the Daily Beast story have been blacklisted for several years and the company is looking into how the publication was able to put the ads on the social network. “Twitter actively prohibits and prevents any offensive ads from appearing on our platform,” the company said in an email.