EU wants to increase pressure on Instagram, TikTok and X


Exclusive

As of: February 20, 2024, 12:00 p.m.

A new EU database shows what content networks like TikTok and Instagram delete or restrict. Around five months after its launch, experts are praising the initiative, but are calling for even more transparency.

An initial evaluation of the new EU database by NDR and Süddeutsche Zeitung shows that the social media platform TikTok apparently deletes significantly more content than Instagram or X, and that “hate speech” plays a major role on the platforms.

The internet platforms have already fed more than four billion entries into the database since the end of September. Specifically, the platforms indicate which content they have deleted or whose visibility they have restricted, and for what reason. The database can be explored via a dashboard, where entries can be filtered by categories such as “hate speech”, “violence”, “pornography” or “influencing elections”.
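The kind of filtering the dashboard offers can be sketched in a few lines of code. Note that the records, field names and category labels below are invented for illustration and do not reflect the real database's schema:

```python
# Sketch: tallying moderation decisions per platform and category from
# mock "statement of reasons" records, mimicking the dashboard's filters.
# All field names and values are hypothetical, not the actual schema.
from collections import Counter

statements = [
    {"platform": "TikTok", "category": "Scope of platform service", "action": "removed"},
    {"platform": "TikTok", "category": "Illegal or harmful speech", "action": "removed"},
    {"platform": "Instagram", "category": "Pornography", "action": "visibility restricted"},
    {"platform": "Instagram", "category": "Scope of platform service", "action": "removed"},
    {"platform": "X", "category": "Illegal or harmful speech", "action": "visibility restricted"},
]

# Count decisions per (platform, category) pair, like filtering by category.
counts = Counter((s["platform"], s["category"]) for s in statements)

# Share of outright removals per platform, analogous to the article's
# "70 percent of reported content deleted" style of figure.
removed = Counter(s["platform"] for s in statements if s["action"] == "removed")
total = Counter(s["platform"] for s in statements)
removal_share = {p: removed[p] / total[p] for p in total}
```

On this toy data, `removal_share` would show all of TikTok's sample entries as removals and none of X's, loosely echoing the differences the article describes.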

Sixteen major platforms have been reporting their moderation decisions for almost five months. With the “Digital Services Act” that recently came into force, reporting has now become mandatory for other providers as well.

Jakob Ohme, a researcher at the Weizenbaum Institute who studies disinformation extensively, rates the newly created database as “first of all a really good instrument.” Above all, the fact that interested citizens, organizations and scientists can now compare the platforms directly with one another is an important step, he says.

Pornography on Instagram, violence on TikTok

The new database is intended to make transparent what problems the platforms have and where they focus their moderation. It is also about building pressure and showing how moderation takes place and where the weaknesses lie, says Matthias C. Kettemann, a legal scholar at the Leibniz Institute for Media Research who studies the “Digital Services Act”.

Evaluating the data and comparing the platforms with one another could inform further regulation or improvements. By far the most reports come from the provider Google Shopping.

Millions of TikTok content deleted versus 650,000 on X

The database also documents that more than 200 million posts have been deleted on the social media platform TikTok since the reporting requirement was introduced at the end of September. That is around 70 percent of all content TikTok reported. In comparison, Instagram lists only around five million deleted videos, accounts and photos, corresponding to around a third of all content it reported.

According to the database, only 648,000 pieces of content were reported on the online platform X. “Elon Musk laid off many employees from that area very early on, and the result is that Twitter has become a worse platform with more hate speech,” comments Kettemann.

Significant differences between platforms

The EU database also shows why internet platforms delete certain content or limit visibility. The largest category on Instagram and TikTok is “Scope of platform service”.

Under this heading the platforms group all decisions that relate to their general rules, such as violations of “age restrictions” or “nudity”. “Illegal or harmful speech” follows as another large category.

Beyond that, the two platforms differ: Instagram seems to restrict “pornography” in particular, while TikTok restricts “violent content”. “That of course also says something about the platforms, for example that TikTok concentrates on entertaining content and therefore moderates it more strictly,” says Kettemann.

When asked by NDR and SZ, TikTok says it moderates based on its own community guidelines and continuously gathers feedback from independent organizations. Meta writes that it welcomes the “Digital Services Act” and relies on extensive human and automated content moderation. X did not respond to the inquiry.

Using AI to cover up the real problems?

To cope with the sheer volume and make deletion decisions more quickly, Meta (on Facebook and Instagram) and TikTok are increasingly using automated content recognition.

Some observers see artificial intelligence as a possible savior. “In my opinion, this ‘shiny’ topic is often pushed forward by corporations in order to cover up the real and major problems in content moderation,” criticizes Julia Kloiber of Superrr Lab, an organization that represents the interests of content moderators at the large networks and, above all, demands better working conditions.

As an example of how AI does not yet work particularly well, especially with videos, she cites a clip from about a year ago that went viral and disturbed many users of a platform: a man puts a cat in a blender, and the animal dies.

“Content moderators didn’t manage to get this video removed for days,” she reports. The video was repeatedly re-uploaded and ultimately had to be removed manually.

The EU database does not reveal how well or poorly artificial intelligence currently works. Although the companies disclose whether a moderation decision was “automated”, this says nothing about whether simple algorithmic processes such as word filters or artificial intelligence were used. “Unfortunately, you can’t delve particularly deeply into the matter, as there are no concrete examples,” says legal scholar Matthias Kettemann. The database relies on self-disclosure.

Problem of self-disclosure

He, along with universities and organizations such as HateAid and the Weizenbaum Institute, is pushing for direct data access to the internet platforms, with the help of which independent analyses could then be carried out.

The platforms have already set up such access, at least in part. It is expected to open up as soon as the “Digital Services Act” is implemented in Germany. The Digital Ministry expects the process to be completed by the end of May at the latest.

Matthias Reiche, ARD Brussels, tagesschau, February 20, 2024 12:23 p.m
