(Bloomberg) -- Facebook Inc. said it’s rolling out a slew of new and expanded ways to rein in the spread of misinformation across its websites and apps, amid heightened global scrutiny of social networks’ actions to reduce false and violent content.
The company said Wednesday that the Associated Press will expand its role as part of Facebook’s third-party fact-checking program. Facebook also will reduce the reach of Groups that repeatedly share misinformation, such as anti-vaccine views, make Group administrators more accountable for violating content standards and allow people to remove their posts and comments from Facebook Groups even after they leave those groups.
Facebook’s executives for years have said they’re uncomfortable choosing what’s true and false. Under pressure from critics and lawmakers in the U.S. and elsewhere, especially since the flood of misinformation during the 2016 U.S. presidential campaign, the social media company with 2 billion users has been altering its algorithms and adding human moderators to combat false, extreme and violent content.
“There simply aren’t enough professional fact-checkers worldwide and, like all good journalism, fact-checking takes time,” Guy Rosen, Facebook’s vice president of integrity, and Tessa Lyons, head of news feed integrity, wrote in a blog post. “We’re going to build on those explorations, continuing to consult a wide range of academics, fact-checking experts, journalists, survey researchers and civil society organizations to understand the benefits and risks of ideas like this.”
While Facebook has updated its policies and efforts, content that violates the company’s standards persists. Most recently, the social network was criticized for not quickly removing video of the mass shooting in New Zealand, which was live-streamed on the platform.