Facebook Inc. said it's rolling out a slew of new and expanded measures to rein in the spread of misinformation across its websites and apps, amid heightened global scrutiny of social networks' efforts to remove false and violent content. The company said Wednesday that the Associated Press will expand its role in Facebook's third-party fact-checking program. Facebook will also reduce the reach of Groups that repeatedly share misinformation, such as anti-vaccine views; hold Group administrators more accountable for content that violates its standards; and allow people to remove their posts and comments from Facebook Groups even after they're no longer members.

Facebook's executives have said for years that they're uncomfortable deciding what's true and what's false. Under pressure from critics and lawmakers in the U.S. and elsewhere, especially since the flood of misinformation during the 2016 U.S. presidential campaign, the social media company, which has 2 billion users, has been altering its algorithms and adding human moderators to combat false, extreme and violent content.

"There simply aren't enough professional fact-checkers worldwide and, like all good journalism, fact-checking takes time," Guy Rosen, Facebook's vice president of integrity, and Tessa Lyons, head of news feed integrity, wrote in a blog post. "We're going to build on those explorations, continuing to consult a wide range of academics, fact-checking experts, journalists, survey researchers and civil society organizations to understand the benefits and risks of ideas like this."

While Facebook has updated its policies and efforts, content that violates the company's standards persists. Most recently, the social network was criticized for not quickly removing the live-streamed video of the mass shooting in New Zealand.

The 2020 U.S. elections will be a test for the new efforts, which come after the platform was used by Kremlin-linked trolls in the lead-up to voting in 2016 and 2018. The scope of election integrity problems is "vast," ranging from misinformation designed to suppress voter turnout to sophisticated activity "trying to strategically manipulate discourse on our platforms," said Samidh Chakrabarti, a product management director at Facebook.

Facebook is also looking to crack down on fake accounts run by humans. "The biggest change since 2016 is that we've been tuning our machine learning systems to be able to detect these manually created fake accounts," Chakrabarti said, adding that the platform removes millions of accounts, run by both bots and humans, each day.

The Menlo Park, California-based company has made progress in detecting and removing misinformation designed to suppress the vote, with content ranging from false claims that U.S. Immigration and Customs Enforcement agents were monitoring the polls to the common tactic of misleading voters about the date of an election. Facebook removed 45,000 pieces of voter-suppression content in the month leading up to the 2018 elections, 90 percent of which was detected before users reported it.

"We continue to see that the vast majority of misinformation around elections is financially motivated," Chakrabarti said. As a result, efforts to remove clickbait also benefit election integrity, he said.