Twitter forced by French court to reveal that it employs only “1,867 moderators” worldwide

It is a figure the social network had refused to reveal until now. Twitter was forced by the French courts to disclose the number of people assigned to moderating its content. Sued in summary proceedings by several anti-racist associations for failing to meet its moderation obligations regarding hate speech, the platform officially stated on Thursday that it employs “1,867 moderators worldwide” for roughly 400 million monthly users, or about one moderator for every 200,000 users.

“Despite a ruling by the Paris court ordering it to hand over all documents detailing its moderation resources for fighting online hate in France, Twitter has once again dodged the issue. The social network refused to say how many moderators it employs in France, what training they receive, or how its algorithms work, limiting itself to stating that it relies on fewer than 2,000 moderators worldwide,” Samuel Lejoyeux, president of the UEJF (Union of Jewish Students of France), told 20 Minutes. The UEJF, together with SOS Racisme, Licra, SOS Homophobie, J’accuse and Mrap, filed a complaint against the platform in May 2020.

Twitter favors moderation “by algorithms”

At its hearing this Thursday before the Paris Court of Appeal, the social network defended its system for moderating hateful content. “We recently doubled the number of people responsible for enforcing our rules. At Twitter, 1,867 people are dedicated exclusively to enforcing our policies and moderating content. This figure represents more than a third of our entire global workforce,” reads the written submissions of Twitter International Company. The figures had also appeared in a report sent to the CSA (the Conseil supérieur de l’audiovisuel, France’s audiovisual regulator) in September 2021, but had never been publicly disclosed until now.

Twitter justifies its small number of human moderators by its use of artificial intelligence, which it considers more effective. “We will not solve the challenge of large-scale moderation with more human resources alone. We have found that we are far more effective in the fight against harmful content by relying more on technology while increasing our teams proportionately,” Twitter explained in the report on the fight against disinformation published by the CSA.

Only 12% of reported hate content removed during the first lockdown

The UEJF, SOS Racisme and SOS Homophobie decided to file their complaint after observing a 43% increase in hateful content on Twitter during the first lockdown in 2020. According to a study the associations conducted from March 17 to May 5, 2020, “racist content increased by 40.5% (over the period), anti-Semitic content by 20% and LGBTphobic content by 48%.” The associations also reported 1,110 hateful tweets to the social network, mainly homophobic, racist or unambiguously anti-Semitic insults, and found that only 12% of them had been deleted within “a reasonable period of 3 to 5 days.”

“Twitter shows no real will to fight hate on its platform (racism, anti-Semitism, homophobia). Anyone can see it, every day, simply by visiting the platform. We want the social network to comply with French law. We demand transparency and precise information about its day-to-day moderation,” says the UEJF president, who is very confident about the ruling to be handed down on January 20.

Regularly accused of hosting or contributing to the spread of hateful or violent content, the major content platforms have been pushed to set up filtering algorithms, reporting procedures and teams of moderators. But for years Twitter has refused to disclose the resources it devotes to moderation. The social network nevertheless says it is investing in moderation technology “to relieve users of the burden of having to file a report,” specifying that “more than one in two tweets on which we take action for abuse” is now flagged by automatic detection rather than by user reports.
