In its ongoing fight against fake news, Facebook has started assigning a reputation score to its users based on their “trustworthiness,” The Washington Post reports.
The new rating tool, revealed by Tessa Lyons, a Facebook product manager working on misinformation, is one of many behavioral signals Facebook continuously takes into consideration “as it seeks to understand risk.”
Previously, Facebook relied heavily on users’ own judgment to flag false news, which proved inefficient for external fact-checkers: users often reported content simply because they disliked it on a personal level.
“(It’s) not uncommon for people to tell us something is false simply because they disagree with the premise of a story or they’re intentionally trying to target a particular publisher,” Lyons said.
Lyons believes the new system will help Facebook make more efficient use of fact-checkers and its other analysis systems when taking down fake news.
As of now, it’s unclear what other criteria are taken into account when rating a user, or what any given user’s current score is. Lyons declined to go into specifics, saying that doing so might tip off bad actors. On the other hand, this secrecy adds to the opacity of Facebook’s policies.
How does Facebook’s rating system work?
The trustworthiness indicator, introduced last year, scores users on a scale from 0 to 1, depending on how accurately they have reported fake news in the past.
Essentially, when you report an article and it does turn out to be fake news, Facebook gives you a higher rating and puts more weight on your reports in the future. Conversely, if you tend to report articles simply because you disagree with them, Facebook gives you a lower rating and will think twice before flagging the articles you report.
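The mechanism described above can be sketched as a simple feedback loop: a user's score is, in essence, the fraction of their past reports that fact-checkers confirmed. The class below is a minimal illustrative sketch under that assumption; the names (`ReporterTrust`, `record_report`) and the neutral starting score are hypothetical, not Facebook's actual algorithm, which the company has not disclosed.

```python
# Hypothetical sketch of a reporter-trust score on a 0-to-1 scale,
# as described in the article. All names and logic here are
# illustrative assumptions, not Facebook's real implementation.

class ReporterTrust:
    def __init__(self):
        self.reports = 0     # total reports filed by this user
        self.confirmed = 0   # reports later confirmed false by fact-checkers

    def record_report(self, confirmed_false: bool) -> None:
        """Record one report and whether fact-checkers confirmed it."""
        self.reports += 1
        if confirmed_false:
            self.confirmed += 1

    @property
    def score(self) -> float:
        """Trust score from 0 to 1; a neutral 0.5 with no history."""
        if self.reports == 0:
            return 0.5  # assumed neutral prior for new reporters
        return self.confirmed / self.reports


user = ReporterTrust()
user.record_report(True)   # report confirmed as fake news
user.record_report(True)   # another accurate report
user.record_report(False)  # report rejected by fact-checkers
print(round(user.score, 2))  # -> 0.67
```

Under this toy model, a user whose reports are usually confirmed trends toward 1, while one who mostly reports content they merely dislike trends toward 0, matching the incentive structure the article describes.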