Is artificial intelligence the silver bullet against disinformation?


As of: March 6, 2023, 10:51 a.m.

Climate change, the coronavirus pandemic and the war in Ukraine – the amount of disinformation grows with every crisis. Artificial intelligence could help deal with it. But there is one crucial sticking point.

By Marleen Wiegmann, tagesschau.de

Russia has been waging a war of aggression against Ukraine for more than a year. The amount of disinformation about it on social networks is massive, spread via influencers, bots and fake accounts. Every day, billions of users post content on platforms such as Twitter, Instagram, Facebook and TikTok, and false information circulates among it again and again. However, false information only counts as disinformation when it is spread deliberately, that is, with an intention behind it.

Finding disinformation on social media platforms is therefore like looking for a needle in a haystack – a never-ending task. Artificial intelligence could help here. But where does this lead?

According to Andreas Dengel, Managing Director of the German Research Center for Artificial Intelligence (DFKI), artificial intelligence is the simulation of intelligent behavior. “Ultimately it’s about enabling a machine to do things that humans normally need intelligence for.”

Analyzing networks or content

In order for AI to detect disinformation, it is trained on data sets. These data sets are created by humans, and the AI uses them to learn what counts as disinformation and what does not.
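How such training could look in its simplest form is sketched below, assuming scikit-learn and a tiny invented set of labeled examples; real systems train on large, carefully curated corpora, not four sentences:

```python
# Minimal sketch: training a text classifier on human-labeled examples.
# The tiny dataset below is invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Human-created training data: 1 = disinformation, 0 = not disinformation
texts = [
    "Secret lab admits the virus was engineered, media silent!",
    "Health ministry publishes updated vaccination statistics.",
    "Proof: election results were altered overnight by hackers.",
    "Parliament debates new climate bill in first reading.",
]
labels = [1, 0, 1, 0]

# Bag-of-words features plus logistic regression: the model learns
# which word patterns humans labeled as disinformation.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# The classifier outputs a probability, not a verdict.
print(model.predict_proba(["Leaked memo proves the numbers were faked"]))
```

Whatever the humans labeled as disinformation is exactly what the model will reproduce – the quality of the labels determines the quality of the detector.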

Disinformation can be recognized by different characteristics. “On the one hand, it’s about developing AI that is able to recognize moods, i.e. emotions and even expressions of opinion,” says Dengel from DFKI. To do this, the AI analyzes content: text, videos or images. Cross-references are particularly difficult: combined, image and text can take on a whole new meaning.
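As a rough illustration of the mood recognition Dengel describes, a pre-trained sentiment model can be queried in a few lines. This is a minimal sketch assuming the Hugging Face transformers library and its default sentiment model; it is not the tooling DFKI actually uses:

```python
# Sketch: recognizing "moods" (sentiment) in text with a pre-trained model.
# The default model loaded by the pipeline is an illustrative choice.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")

posts = [
    "They are lying to us again, wake up!",
    "The new report looks thorough and well sourced.",
]
for post in posts:
    result = sentiment(post)[0]
    print(f"{result['label']} ({result['score']:.2f}): {post}")
```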

On the other hand, network structures can also provide an indication of disinformation. Alexander Schindler is a senior researcher at the Austrian Institute of Technology (AIT): “There are telltale communication patterns, for example from fake news bots, i.e. automated accounts in social networks that spread disinformation or propaganda.” AI can also analyze the links between user accounts or the reactions to a post. A resonance analysis shows whether users react to content neutrally, criticize it or confirm it.
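A toy version of such a network analysis might look as follows. The interaction graph and the simple “amplifies many, is never amplified” heuristic are invented assumptions to show the idea, not AIT’s actual method:

```python
# Sketch: looking for telltale communication patterns in an interaction graph.
# Accounts are nodes; an edge A -> B means A amplified (shared) a post by B.
# The graph and the heuristic below are invented for illustration.
import networkx as nx

G = nx.DiGraph()
shares = [
    ("bot_01", "source_x"), ("bot_02", "source_x"), ("bot_03", "source_x"),
    ("bot_01", "source_y"), ("bot_02", "source_y"), ("bot_03", "source_y"),
    ("alice", "bob"), ("bob", "carol"),
]
G.add_edges_from(shares)

# Crude heuristic: accounts that amplify many targets but are never
# amplified themselves behave more like automated spreaders than people.
for node in G.nodes:
    if G.out_degree(node) >= 2 and G.in_degree(node) == 0:
        print(f"suspicious amplifier: {node}")
```

A resonance analysis would go a step further and also classify the reactions attached to each interaction as neutral, critical or confirming.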

AI is only as strong as the concept

However, disinformation is itself a fuzzy concept. That means: “Disinformation is hardly definable,” says Schindler. “The definition depends on so many factors, for example political or religious views, that a uniform or standardized label is hardly possible.”

Current research projects on artificial intelligence in German-speaking countries therefore focus more on supporting human work. One idea from AIT, for example, is to give content a kind of nutrition table – similar to the labels on food. The Correctiv research center is also working with the Ruhr University of Bochum and the Technical University of Dortmund on a project in which AI is intended to help participants classify information.
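What such a nutrition table for content could contain is sketched below; every field and value is a hypothetical example, not a schema from the AIT project:

```python
# Sketch: a "nutrition table" for a piece of content, analogous to food labels.
# All fields and values are invented for illustration.
content_label = {
    "source_transparency": "author unknown",
    "emotional_language": "high",
    "factual_claims_checked": 1,
    "factual_claims_total": 4,
    "spread_pattern": "amplified by automated accounts",
}
for field, value in content_label.items():
    print(f"{field:>24}: {value}")
```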

Platforms work with a mix of measures

“The social media platforms work with a mix of different technologies,” says Dengel from DFKI. “It also varies from platform to platform. Above all, there is a lot of experimentation.” Meta, Twitter and TikTok have a strong interest in advancing the development of AI – among other things, its use could cut numerous jobs.

However, if AI on social media platforms is to detect disinformation without a human double-checking its decisions, the methods would have to be very precise. “And that can probably only be achieved if the scope of action is severely restricted,” says Schindler from AIT. In other words, disinformation would primarily be recognized in clear-cut cases; more subtle cases would slip through. Whether and how this technology is used cannot be judged from the outside.
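The trade-off Schindler describes, acting automatically only on clear-cut cases, can be pictured as a simple confidence threshold. The scores, threshold value and routing below are illustrative assumptions, not any platform’s real settings:

```python
# Sketch: restricting the scope of action to high-confidence cases.
# Scores would come from a classifier like the one sketched above;
# the threshold values are invented examples.
AUTO_ACTION_THRESHOLD = 0.95  # act automatically only on very clear cases

def triage(post: str, disinfo_score: float) -> str:
    if disinfo_score >= AUTO_ACTION_THRESHOLD:
        return f"auto-flag: {post!r}"     # clear case: the machine acts
    if disinfo_score >= 0.5:
        return f"human review: {post!r}"  # subtle case: a person decides
    return f"no action: {post!r}"         # likely harmless

print(triage("Hackers changed the vote count, sources say", 0.97))
print(triage("This statistic seems misleading to me", 0.62))
```

Raising the threshold buys precision at the price of recall: the subtle cases the article mentions land in the middle band and still need a human.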

To ensure that social media platforms have to publish more information in the future and thus become more transparent, the EU has passed the Digital Services Act. The regulation contains numerous new provisions. From February 2024, for example, platforms will have to tell users why their content was blocked or deleted. If platform operators do not comply with these requirements, they face fines.

Definition of disinformation essential

But this is exactly where difficulties could arise. “The system classifies something as disinformation, but it cannot explain why,” says Dengel, Managing Director of DFKI. “Perhaps the data used to train the AI was distorted or incomplete, or the examples were not balanced. Systems that explain or justify their decisions are currently the subject of intensive research.”

This could become a major problem for the future of AI in the fight against disinformation, because content cannot simply be deleted or have its visibility restricted – especially not without further justification.

According to experts, there is ultimately no way around artificial intelligence. But: “The assumption that AI is the salvation – that is simply wrong,” says Schindler from AIT. Ultimately, AI is only as good at detecting disinformation as the specifications it is given. It cannot do the most important work for the developers: defining disinformation.
