Meta says it has strengthened its content moderation

After a warning from the European Commission, Meta announced on Friday that it had significantly strengthened the moderation of content posted on its platforms, notably Facebook, and had removed hundreds of thousands of posts since the start of the war between Israel and Hamas.

The group, which owns Instagram, Threads and the WhatsApp messaging service in addition to Facebook, has set up a dedicated unit that includes Arabic and Hebrew speakers. “This allows us to more quickly remove content that violates our policies and guidelines, and provides an additional line of defense against misinformation,” Meta said in a message posted on its site Friday.

Some 795,000 posts deleted or flagged with a warning

In the three days following Hamas’ surprise attack on Israeli territory on Saturday, Meta said it deleted or flagged with a warning some 795,000 posts in Arabic or Hebrew. The volume of Arabic- and Hebrew-language posts being actively moderated each day is seven times the average observed over the previous two months.

Meta says it is preventing more posts from being recommended to users when their content “potentially” violates the group’s guidelines or is considered “borderline.”

Removal of content that could identify hostages

The social media giant also decided to remove any content likely to identify one or more of the hostages currently held by Hamas, “even if it is to condemn (the hostage-taking) or to draw attention to this situation.”

In another measure, Meta has restricted the distribution of certain keywords on Instagram, which no longer appear when a user searches for them.

Brussels warned Meta on Wednesday about the rise in false information and called on its chief executive, Mark Zuckerberg, to take steps to address it. European Digital Commissioner Thierry Breton also issued similar warnings to X (formerly Twitter) and TikTok.
