Facebook strengthens moderation of false information in its groups


Meta on Thursday added a tool that allows Facebook group administrators to automatically filter content that has been flagged as false information, in the run-up to the U.S. midterm elections, a period conducive to waves of disinformation on the social network.

“To ensure that content is more reliable (…), group administrators can automatically place on hold posts containing information determined to be false by third-party fact-checkers, so that they can review them before deleting them”, explained Tom Alison, head of the Facebook app, in a press release on Thursday.

The platform had already given group administrators more tools to moderate content, but many NGOs and authorities still accuse it of not doing enough to combat misinformation.

More than 1.8 billion people use Facebook Groups every month. Parents of students, fans of artists and neighbors gather there to exchange news and organize activities, but also to discuss politics. 

Meta has been criticized for not sufficiently policing groups that contributed to the political radicalization of certain individuals, particularly during the 2020 U.S. elections.

AFP takes part in Facebook's third-party fact-checking program, developed by the platform since 2016, in some thirty countries. Around sixty media outlets worldwide, both generalist and specialized, are also part of the program.

If a piece of information is rated false or misleading by one of these outlets, Facebook users are less likely to see it appear in their News Feed. And if they do see it or try to share it, Facebook suggests that they read the fact-checking article.