SZ digital summit on hate on the internet: “Anonymity not at any price”

The two discussants agree on at least one point: the biggest problem is still people.

The extent of insults and threats online cannot be explained by irresponsible platforms or misguided algorithms alone. Behind every death threat there is a sender, and there is no panacea that protects those affected from it.

Josephine Ballon and Florian Gallwitz would both sign off on this statement. But when it comes to its assessment, to countermeasures and to consequences, the positions of the lawyer and the media informatics professor diverge widely. This quickly becomes clear as the two discuss hate on the internet on the panel at the SZ digital summit.

“The picture is often painted in the media that people are being showered with hate and disinformation on social media,” says Gallwitz, who teaches as a professor at the Nuremberg Institute of Technology. “I think that’s an exaggeration. Take Twitter: it’s like a huge party with hundreds of millions of participants. At every big party, some people get bullied. Emotions run high in some discussions, but that doesn’t necessarily mean there’s an orchestrated campaign behind it.”

Josephine Ballon (center) and Florian Gallwitz (right) discuss hate on the Internet at the SZ digital summit.

(Photo: Robert Haas)

Ballon considers this comparison trivializing. “Certain groups know exactly how to exploit and manipulate the mechanisms of social networks,” says the co-director of the organization Hate Aid, which advocates for people affected by digital violence and online hate. Right-wing extremist circles, she says, call for targeted attacks intended to unsettle and intimidate other users.

Whether orchestrated or not, criminal hate comments are on the rise. The police crime statistics for 2023, presented at the beginning of the week, record almost 21,000 insults committed using the internet as the means of the crime, an increase of around 19 percent. The number of unreported cases is large; only about one percent of insults on the internet are reported to the police.

Crime statistics remain abstract; they count cases without shedding light on the consequences for those affected. These become clear in the study “Loud hatred – quiet retreat”, which Hate Aid and other organizations published in February. According to it, almost one in two people has been insulted online, and a quarter of those surveyed have been threatened with physical violence. Young women, queer people and people with a visible migration background are particularly often affected. In each case, more than half of those surveyed take part in discussions less often for fear of hostility or consciously choose their words more cautiously.

“You just need a thick skin”

Gallwitz also recognizes this problem. “In individual cases this is painful for those affected,” he said in a conversation a few hours before the panel discussion. The mass of insults and threats can be stressful. “But if you express yourself politically in public, then you need a thick skin. Anyone who feels attacked too quickly is not cut out for this.”

For Gallwitz, the solution is not to delete more content or moderate it more consistently. On many things he does not share the opinion of Twitter owner Elon Musk, who bought the platform and renamed it X. “But I agree with him on one thing: a private corporation should not set the rules for a public discussion space.” If Gallwitz had his way, X, Instagram or Tiktok would have to comply only with local laws: anything that is punishable in the respective country would be removed.

The reality is different. Most major platforms define in their terms of use where the limits of freedom of expression are. These rules also sanction some speech that does not clearly violate the law, such as medical misinformation, climate change denial, or spam comments.

Prosecution at the expense of anonymity?

“These terms and conditions are often very vague and poorly enforced,” says Hate Aid’s Ballon. “But fundamentally I think it’s right that platform operators do not limit themselves strictly to the legal framework.” After all, social networks are not neutral platforms; they constantly make content decisions by giving certain posts more reach and visibility.

Gallwitz, on the other hand, believes that corporations should stay out of tricky moderation decisions as much as possible. “Even Hate Aid litigates its cases through several court instances in Germany,” he says. Apparently the old rule applies: ask three lawyers, get three opinions. “We cannot expect the platforms to automate German case law. Artificial intelligence is not a solution for this either; it is far too invasive and error-prone. Orwell is turning in his grave.”

Gallwitz finds similarly dystopian Ballon’s demand that online anonymity be compromised, if necessary, in order to prosecute criminal hate comments. “It must still be possible to express my opinion anonymously,” says the professor of media informatics. “Otherwise it quickly becomes totalitarian.” The fear that everything you say online could later be attributed to you might discourage many people from speaking out on contentious issues.

Ballon says she does not want to force anyone to register with a platform using an ID card. But there needs to be a discussion about how these crimes can be solved. “Anonymity is important, but not at any price. Those affected must also be able to enforce their rights.”
