Social Media: Are Social Bots Dangerous?

They are commonplace on social networks, yet hardly noticeable: social bots, automated accounts that independently post, like or share content on platforms such as Twitter or Facebook. It is not always evident that no real user is behind the content, but rather an automatically running program. Critics therefore consider social bots dangerous. Others doubt that automated accounts actually influence real users. One of them is Simon Hegelich, Professor of Political Data Science at the Technical University of Munich. He explains what to watch out for with bots.

SZ: Professor Hegelich, how can users even recognize a social bot?

Simon Hegelich: Many bots do not hide the fact that they produce automated content. On Twitter, for example, there are bots that automatically send out earthquake warnings or the weather report. But there are also automated accounts that are not marked as such. Many studies therefore try to identify these accounts by patterns: what bots in general have in common and whether a specific account matches those traits. However, there is no clear definition of a social bot, which can lead to high error rates.
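To illustrate the kind of pattern matching such studies rely on, here is a minimal rule-based sketch in Python. The features, thresholds and the `Account` structure are hypothetical and chosen only for illustration; they are not taken from Hegelich's research or any particular detection tool.

```python
from dataclasses import dataclass

@dataclass
class Account:
    # Hypothetical features of the kind bot-detection studies look at.
    tweets_per_day: float        # average posting frequency
    interval_stddev_s: float     # spread of time between posts (low = machine-like)
    follower_ratio: float        # followers divided by accounts followed
    default_profile: bool        # still using the default avatar and empty bio

def bot_score(acc: Account) -> float:
    """Return a rough 0..1 score; higher means more bot-like.

    This is a toy heuristic. Real studies use far richer features and
    statistical models, and still suffer high error rates because
    'social bot' is not sharply defined.
    """
    score = 0.0
    if acc.tweets_per_day > 100:        # few humans sustain this pace
        score += 0.4
    if acc.interval_stddev_s < 60:      # suspiciously regular posting rhythm
        score += 0.3
    if acc.follower_ratio < 0.1:        # follows many, followed by few
        score += 0.2
    if acc.default_profile:             # no personalised profile
        score += 0.1
    return min(score, 1.0)

# Example: a high-volume, very regular, anonymous account scores as bot-like.
print(bot_score(Account(150, 30, 0.05, True)))   # -> 1.0
```

Such thresholds inevitably misclassify very active humans and slow-posting bots alike, which is exactly the error-rate problem Hegelich points to.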

The political scientist Simon Hegelich conducts research on political data science at TU Munich. (Photo: private)

Automatic earthquake warnings – that sounds helpful. Nevertheless, critics warn that automatically generated content is dangerous.

It depends on how you use social bots. I use one for my own Twitter account: when I write something on my blog, the bot automatically posts a tweet about it. That is not difficult at all; anyone could set one up themselves. It becomes problematic when social bots post or share political content. The fear is that this could influence users' opinions.
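As an illustration of how simple such a blog-to-tweet bot can be, here is a minimal sketch in Python. It assumes the blog exposes an RSS feed and uses the `feedparser` and `tweepy` libraries with placeholder credentials and a hypothetical feed URL; it is not Hegelich's actual setup.

```python
import feedparser   # pip install feedparser
import tweepy       # pip install tweepy

# Hypothetical feed URL and API credentials -- replace with your own.
FEED_URL = "https://example.com/blog/feed.xml"
client = tweepy.Client(
    consumer_key="...", consumer_secret="...",
    access_token="...", access_token_secret="...",
)

already_posted = set()  # in practice, persist this between runs

def post_new_entries() -> None:
    """Tweet a link to every blog entry that has not been tweeted yet."""
    feed = feedparser.parse(FEED_URL)
    for entry in feed.entries:
        if entry.link in already_posted:
            continue
        client.create_tweet(text=f"New blog post: {entry.title} {entry.link}")
        already_posted.add(entry.link)

# Run periodically, e.g. from a cron job.
if __name__ == "__main__":
    post_new_entries()
```

Roughly twenty lines of this kind are all it takes, which is why unmarked automated accounts are so easy to create.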

Has this fear been confirmed?

Manipulating political opinions is generally very difficult. Just because a bot shares #merkelmussweg a thousand times, for example, does not mean anyone changes their mind. So social bots certainly cannot develop a political narrative that influences others on their own. But they can amplify signals in political discourse, so that, for example, a large number of users are shown the same message. The real danger is that this message is then perceived as a majority opinion and users' perception is distorted. This effect is reinforced by how social networks work.

Why is the way social networks work a problem?

One must not forget that Twitter and the like are private corporations pursuing economic interests. That means the focus is on content that is shared often. Most of the time, these are topics that many people get upset about without engaging with them more deeply. With social bots, such topics can then be given even more attention artificially. You have to ask yourself whether we want it that way or whether it can damage democracy.

Could mandatory labeling of bots solve this problem?

It would be very easy to label any automated content. But that would mean, for example, that news agencies would have to flag all automatically generated reports. Professionally operated accounts that schedule posts to go out automatically would also have to be labeled. What would such a label accomplish? Just because something was created automatically does not mean it is less credible.

How, then, can a distortion of users' perception be prevented?

In general, it cannot be proven in retrospect whether a social bot has distorted perception or not. But the main point is: as a user, I also have to think about this myself and keep in mind that what happens on social media does not necessarily reflect the real world. The question I therefore ask myself is: is it appropriate for so much political communication to take place on platforms that are not suited to it? Don't we actually need something like a public social media platform for this? A platform that also shows content that has not been shared often. Then social bots would be ineffective.
