To help curb foreign interference in public discourse, social media giant Facebook will use machine learning to help identify and demote foreign Pages that are likely to spread financially motivated hoaxes to people in other countries.

Over the last year and a half, Facebook has been actively committed to fighting false news through a combination of technology and human review, including removing fake accounts, partnering with fact-checkers, and promoting news literacy.

The company is also expanding its third-party fact-checking programme to 14 countries and has plans to scale to more countries by the end of the year. These certified, independent fact-checkers rate the accuracy of stories on Facebook, helping reduce the distribution of stories rated as false by an average of 80 per cent.

One significant challenge in fighting misinformation is that it manifests differently across content types, countries and mediums such as photos and videos. To address this, Facebook has expanded its test to fact-check photos and videos to four countries. This covers media that are manipulated (e.g. a video edited to show something that did not really happen) or taken out of context (e.g. a photo from a previous tragedy attached to a different, present-day conflict).
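
Facebook has not published how its photo checks work, but one common way to spot an out-of-context image is to compare a newly posted photo against images already tied to earlier events using a perceptual hash. The sketch below is illustrative only: the file names, archive and distance threshold are hypothetical, and it is not a description of Facebook's system.

```python
# Minimal sketch (not Facebook's actual system): flag a newly posted photo
# that perceptually matches an image already associated with a past event.
from PIL import Image  # pip install Pillow

def average_hash(path, size=8):
    """64-bit 'average hash': shrink, greyscale, threshold each pixel on the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hamming(a, b):
    """Number of differing bits between two hashes; small means visually similar."""
    return sum(x != y for x, y in zip(a, b))

# Hypothetical archive of photos already linked to previous tragedies.
archive = {"2015_earthquake.jpg": average_hash("2015_earthquake.jpg")}

new_post = average_hash("viral_post_today.jpg")
for name, known_hash in archive.items():
    if hamming(new_post, known_hash) <= 5:  # near-identical image reused in a new context
        print(f"Possible out-of-context reuse of {name}; route to fact-checkers")
```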

“And with more than a billion pieces of content posted every day, we know that fact-checkers can’t review every story one-by-one. So, we are looking into new ways to identify false news and take action on a bigger scale. Machine learning helps us identify duplicates of debunked stories,” Facebook said in a blog post.  
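
Facebook has not detailed the model behind this, but catching duplicates of an already debunked story is commonly framed as a text-similarity problem. The following sketch, which assumes illustrative story texts and an arbitrary threshold, shows one such approach using TF-IDF cosine similarity; it stands in for, rather than reproduces, Facebook's method.

```python
# Minimal sketch (not Facebook's published method): flag new posts that are
# near-duplicates of stories fact-checkers have already rated false.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

debunked = [
    "Celebrity X donates entire fortune, asks fans to forward this post",
    "Miracle fruit cures all known diseases, doctors furious",
]
new_posts = [
    "Doctors furious: this miracle fruit cures every known disease!",
    "Local council announces new recycling schedule for 2019",
]

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(debunked + new_posts)

# Compare each new post against every debunked story.
scores = cosine_similarity(matrix[len(debunked):], matrix[:len(debunked)])

for post, row in zip(new_posts, scores):
    best = row.max()
    if best > 0.6:  # high similarity suggests a duplicate of a debunked story
        print(f"Demotion candidate (similarity {best:.2f}): {post}")
```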

The company will also soon start working with fact-checking partners to use Schema.org's ClaimReview, an open-source framework used by various technology companies and fact-checking organisations. This will make it easier for fact-checkers to share ratings with Facebook and help the social network respond faster, especially in times of crisis.
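
ClaimReview works by expressing a fact-checker's verdict in a small set of standardised fields, so one format can carry ratings from many organisations. The sketch below shows roughly what such a rating looks like when built as JSON-LD; the outlet, URLs and rating scale are hypothetical and not drawn from Facebook's programme.

```python
# Illustrative only: a fact-checker's verdict expressed with Schema.org's
# ClaimReview vocabulary and serialised as JSON-LD.
import json

claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "claimReviewed": "Photo shows flooding in City X this week",   # the claim being checked
    "author": {"@type": "Organization", "name": "Example Fact Check"},
    "datePublished": "2018-06-21",
    "url": "https://factcheck.example.org/flood-photo",
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": 1,        # position on the outlet's own scale
        "bestRating": 5,
        "worstRating": 1,
        "alternateName": "False" # human-readable verdict
    },
}

print(json.dumps(claim_review, indent=2))
```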