False news on Facebook has long been a tool for economic and political gain: spammers can use it to drive clicks and generate profit. And the way it has been used by adversaries in recent elections and amid ethnic conflicts around the world is reprehensible.
Facebook has been making significant investments to stop false news from spreading and to promote high-quality journalism and news literacy. The company has announced a three-pronged strategy to fight misinformation on Facebook -- remove accounts and content that violate its Community Standards or ad policies; reduce the distribution of false news and inauthentic content such as clickbait; and inform people by giving them more context on the posts they see.
With this approach, Facebook aims to root out the bad actors that frequently spread fake stories.
And while false news does not itself violate Facebook’s Community Standards, it often violates the company’s policies in other categories, such as spam, hate speech or fake accounts. For example, if a Facebook Page pretends to be run by Americans but is actually operating out of Macedonia, that violates the requirement that people use their real identities and not impersonate others. In that case, Facebook will take down the whole Page, immediately eliminating any posts it made that might have been false.
A lot of the misinformation that spreads on Facebook is financially motivated, much like email spam in the 90s. If spammers can get enough people to click on fake stories and visit their sites, they’ll make money off the ads they show. By making these scams unprofitable, Facebook removes the incentive to spread false news on its platform.
“We’ve started penalizing clickbait, links shared more frequently by spammers, and links to low-quality web pages, also known as ‘ad farms’,” the company said.
Facebook also takes action against entire Pages and websites that repeatedly share false news, reducing their overall News Feed distribution. And since Facebook doesn’t want to make money off misinformation or help those who create it profit, these publishers are not allowed to run ads or use monetisation features like Instant Articles.
Facebook has also partnered with third-party fact-checkers to review and rate the accuracy of articles and posts on Facebook. These fact-checkers are independent and certified through the non-partisan International Fact-Checking Network. When these organisations rate something as false, Facebook ranks those stories significantly lower in News Feed. On average, this cuts future views by more than 80 per cent.
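The downranking step described above can be illustrated with a toy sketch. This is purely hypothetical, not Facebook's actual ranking system: the scoring model, the `rated_false` flag and the demotion factor are all assumptions made for illustration. The idea is simply that a story rated false by a fact-checker keeps only a small fraction of its ranking score, pushing it far down the feed.

```python
# Toy illustration of demoting fact-checked-false stories in a ranked feed.
# The scoring model and demotion factor are hypothetical, not Facebook's system.

DEMOTION_FACTOR = 0.2  # assumed: a flagged story keeps only 20% of its score

def rank_feed(stories):
    """Sort stories by score, demoting those rated false by fact-checkers."""
    def effective_score(story):
        score = story["score"]
        if story.get("rated_false"):
            score *= DEMOTION_FACTOR
        return score
    return sorted(stories, key=effective_score, reverse=True)

feed = [
    {"id": "a", "score": 0.9, "rated_false": True},
    {"id": "b", "score": 0.6},
    {"id": "c", "score": 0.3},
]
print([s["id"] for s in rank_feed(feed)])  # → ['b', 'c', 'a']
```

Even though story "a" has the highest raw score, its false rating demotes it below the other stories, mirroring the sharp drop in distribution the article describes.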
Even with these steps, Facebook believes people will still come across misleading content on Facebook and the internet more broadly. To help people make informed decisions about what to read, trust and share, the company is investing in news literacy and building products that give people more information directly in News Feed.
For example, Facebook recently rolled out a feature to give people more information about the publishers and articles they see, such as the publisher’s Wikipedia entry. Another feature, called Related Articles, displays articles from third-party fact-checkers immediately below a story on the same topic. If a fact-checker has rated a story as false, Facebook lets people who try to share the story know there’s more reporting on the subject.
“As we double down on countering misinformation, our adversaries are going to keep trying to get around us. We need to stay ahead of them, and we can’t do this alone. We’re working with our AI research team, learning from academics, expanding our partnerships with third-party fact-checkers, and talking to other organizations — including other platforms — about how we can work together,” Facebook added.