Status: 03/06/2023 10:51 a.m.
Climate change, Corona and the war in Ukraine: the amount of disinformation grows with every crisis. Artificial intelligence could be a way to deal with it. But it has a crucial sticking point.
For more than a year, Russia has been waging a war of aggression against Ukraine, and the amount of disinformation about it on social media is massive. It is spread via influencers, bots and fake accounts. Every day, billions of users post content on platforms such as Twitter, Instagram, Facebook and TikTok, and time and again some of it spreads false information. However, false information only counts as disinformation when it is spread purposefully, that is, when there is an intention behind the dissemination.
Finding disinformation on social media platforms is like looking for a needle in millions of haystacks, a never-ending task. Nobody has been able to keep up with it for a long time. Artificial intelligence seems to be the only, and perhaps saving, solution. But where does this lead?
According to Andreas Dengel, Executive Director of the German Research Center for Artificial Intelligence (DFKI), artificial intelligence is the simulation of intelligent behavior. “Ultimately, it’s about enabling a machine to do things that humans normally need intelligence to do.”
Examining the network or the content
For AI to be able to recognize disinformation, it is trained on data sets. These data sets are created by humans, and from them the AI learns what counts as disinformation and what does not.
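The principle of learning from a human-labeled data set can be sketched in a few lines. The following is a deliberately minimal illustration, not any system used by DFKI or the platforms: a toy classifier that counts how often words appear in hand-labeled examples and scores new text against those counts. The training texts and labels are invented for the example.

```python
from collections import Counter

# Toy human-labeled data set (invented examples): each item is (text, label).
TRAIN = [
    ("secret lab created the virus share before deleted", "disinfo"),
    ("miracle cure they do not want you to know", "disinfo"),
    ("ministry publishes updated infection figures", "legit"),
    ("study in peer reviewed journal finds modest effect", "legit"),
]

def train(samples):
    """Count how often each token appears per label."""
    counts = {}
    for text, label in samples:
        counts.setdefault(label, Counter()).update(text.lower().split())
    return counts

def classify(text, counts):
    """Score each label by smoothed token frequencies; return the best match."""
    tokens = text.lower().split()
    def score(label):
        c = counts[label]
        total = sum(c.values()) + len(c)  # add-one smoothing for unseen words
        return sum((c[t] + 1) / total for t in tokens)
    return max(counts, key=score)

counts = train(TRAIN)
print(classify("share this secret cure before it gets deleted", counts))  # → disinfo
```

Real systems use far richer models, but the dependence the article describes is the same: the classifier can only reproduce whatever notion of "disinformation" the human labelers encoded in the training data.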
Disinformation can be recognized by different characteristics. “On the one hand, it’s about developing AI that is able to recognize moods, i.e. emotions, right through to expressions of opinion,” says Dengel from DFKI. For this, AI analyzes content, i.e. texts, videos or images. Cross-references are particularly difficult: taken together, an image and a text can take on a whole new meaning.
On the other hand, network structures can also provide an indication of disinformation. Alexander Schindler is a senior researcher at the Austrian Institute of Technology (AIT): “There are telltale communication patterns, for example fake news bots, i.e. automated accounts in social networks that spread disinformation or propaganda.” AI can also analyze the connections between user accounts or the reactions to a post. A resonance analysis shows whether users react to content neutrally, criticize it or confirm it.
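The two signals Schindler mentions can be made concrete with a small sketch. This is an assumption-laden illustration, not AIT's actual method: one function computes the resonance of a post (the share of confirming, criticizing and neutral reactions), and one applies a crude bot heuristic (many posts within a very short time span). The reaction log and thresholds are invented.

```python
from collections import Counter

# Hypothetical reaction log for one post: (account, reaction) pairs.
REACTIONS = [
    ("user_a", "confirm"), ("user_b", "confirm"), ("user_c", "criticize"),
    ("user_d", "neutral"), ("user_e", "confirm"), ("user_f", "confirm"),
]

def resonance(reactions):
    """Share of each reaction type, as in the resonance analysis described above."""
    counts = Counter(r for _, r in reactions)
    total = sum(counts.values())
    return {kind: counts[kind] / total for kind in ("confirm", "criticize", "neutral")}

def looks_bot_like(post_times, min_posts=5, max_span_seconds=60):
    """Crude pattern check (an invented heuristic): many posts in a very short span."""
    return len(post_times) >= min_posts and (max(post_times) - min(post_times)) <= max_span_seconds

print(resonance(REACTIONS))  # mostly confirming reactions
```

A post that is overwhelmingly confirmed by a cluster of accounts which all post in rapid, machine-like bursts would score on both signals at once, which is why network features complement the content analysis.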
AI only as strong as the concept
However, disinformation is itself a fuzzy concept. “Disinformation can hardly be defined,” says Schindler. “The definition depends on so many factors, such as political or religious views, that uniform or standardized labeling is hardly possible.”
Current research projects on artificial intelligence in German-speaking countries are therefore more concerned with supporting the work of humans. One idea from the AIT, for example, is to give content a kind of nutritional value table, similar to food labeling. The Correctiv research center is also working, in cooperation with Ruhr University Bochum and the Technical University of Dortmund, on a project in which AI is intended to help participants classify information.
Platforms work with a mix of measures
It is hard to see the social media platforms’ cards; only tendencies are recognizable. “They work with a mix of different technologies,” says Dengel from the DFKI. “It also varies from platform to platform. Above all, there is a lot of experimentation.” Meta, Twitter and TikTok have a great interest in advancing the development of AI; among other things, its use could save numerous jobs.
However, if AI on social media platforms is to detect disinformation without a human double-checking its decisions, the methods would have to be very accurate. “And that can probably only be achieved if the scope of action is severely restricted,” says Schindler from the AIT. This means disinformation would be recognized above all in clear-cut cases; subtler ones would slip through. Whether and how this technology is used is opaque from the outside.
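Restricting the scope of action, as Schindler describes, often amounts to thresholding in practice. The sketch below is a generic illustration with invented thresholds, not a documented platform policy: only cases where the model is extremely confident are acted on automatically, everything ambiguous goes to a human.

```python
def route(score, auto_threshold=0.98, review_threshold=0.5):
    """Route a model confidence score in [0, 1]: act automatically only on
    clear-cut cases, send ambiguous ones to human review, leave the rest alone.
    Thresholds here are invented for illustration."""
    if score >= auto_threshold:
        return "auto-flag"
    if score >= review_threshold:
        return "human review"
    return "keep"

print(route(0.99))  # → auto-flag
print(route(0.70))  # → human review
print(route(0.10))  # → keep
```

Raising `auto_threshold` makes automated decisions more accurate but shrinks the share of cases the AI handles on its own, which is exactly the trade-off the paragraph describes.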
The EU passed the Digital Services Act so that social media platforms will have to publish more information in the future and thus become more transparent. The act contains numerous new provisions. From February 2024, for example, platforms will have to inform users why their content was blocked or deleted. If platform operators do not comply with these requirements, penalties may be imposed.
Definition of disinformation essential
But even here there could be problems, because AI can currently explain itself only insufficiently. “The system classifies something as disinformation, but it cannot explain why,” says Dengel, Executive Director of DFKI. “Perhaps the data used to train the AI was distorted or incomplete, or the examples were not balanced. Systems that explain or justify their decisions are currently being intensively researched.”
That, however, could become a big problem for the future of AI against disinformation, because content cannot simply be deleted or have its visibility restricted, especially not without justification.
Ultimately, however, there is no way around artificial intelligence. But: “The assumption that AI is the salvation is simply wrong,” says Schindler from the AIT. When it comes to detecting disinformation, AI is only as good as the guidelines it is given. It cannot do the most important work for the developers: defining disinformation.