“AI is used by extremist groups to spread anti-Semitic ideas”

In January 2023, a video hijacking the image of actress Emma Watson, known for playing Hermione in the Harry Potter saga, went viral. In it, the young woman appears to read Mein Kampf, the book written by Adolf Hitler. The video is a fake, one example of how artificial intelligence tools are misused to spread hatred online.

UNESCO cites this example with alarm in a report published this Tuesday, which 20 Minutes was able to consult ahead of publication. The institution's experts fear the threats that artificial intelligence poses to the memory of the Shoah, and the role these tools can play in spreading anti-Semitism. Karel Fracapane, who leads UNESCO's fight against hate speech, answers 20 Minutes' questions on the subject.

UNESCO is alarmed that certain uses of artificial intelligence threaten our collective ability to recognize the facts about the Shoah, as well as our ability to distinguish truth from falsehood…

That reflects certain aspects of reality. Artificial intelligence tools are sometimes used by extremist groups to spread anti-Semitic ideas. And, like any form of information and knowledge production, artificial intelligence carries biases: it collects information that itself contains human biases. As a result, artificial intelligence, not necessarily deliberately, will produce and promote extremist, anti-Semitic, racist and Holocaust-denial content.

Then there is a threat to the content, accuracy and complexity of this history. There is also a risk of oversimplifying the story: the sources on which artificial intelligence tools draw are not necessarily reliable. The tools will produce limited and, sometimes, manipulated knowledge.

Indeed, one of the aspects highlighted in the report is the opacity of the sources used by certain conversational chatbots. There is a risk that they will pull data from denialist sites and use it in their answers…

What are the report’s recommendations?

One of the first concerns the transparency and operation of these tools. We don't necessarily know how they work or on what sources they draw, which obviously leaves room for any form of manipulation. There are safeguards in place to prevent offensive content, dangerous content, and so on. But since we don't know very well how these systems work, extremist groups can devise strategies to circumvent those safeguards. This means, for example, feeding in as many negationist sources as possible for artificial intelligence models to draw from. There are also jailbreaking strategies: continuously asking the artificial intelligence system offensive, extremist questions and the like, until, in order to respond, it is forced to seek out answers that produce extremist and anti-Semitic content.

You insist in the report on the need to raise awareness among the general public about how these tools work. Why?

It’s fundamental. This is a question of critical thinking. Sometimes it is almost impossible to distinguish between accurate information and information that has been hallucinated [invented] by the system. Other times, it’s impossible to distinguish a deepfake image from an archival photo.

The objective is to develop critical thinking among the general public. And that starts at school. These elements must be taken into account in education systems to protect students from hate speech of this kind, from disinformation, conspiracy theories and other vehicles of racist or anti-Semitic ideas.

Google Bard inventing testimonies from Holocaust survivors. ChatGPT inventing the concept of a “Shoah by drowning.” Don’t these examples, cited in the report, discredit any use of these tools?

Yes, for the most part. When the tool lacks sources, it fills in the gaps: it produces false information on its own [a phenomenon known as hallucination].

How did this hallucination of a “Shoah by drowning” come about?

When you research certain topics, the answers also depend on linguistic spaces. If you search in English or French [for the Shoah], you will find many more sources than if you search in Chinese or Russian. If you search in Chinese for the term “holocaust,” you will mainly find death metal bands. If you research the Shoah in Russian, there is a distinct lack of information, of sources, for the tools to draw on. And there you will not get an answer. In certain cases, depending on the nature of the questions, the tools will recreate, will invent, information. That is the problem.

Artificial intelligence works a bit like an oracle. It gives you an answer that is presented as authoritative. You go to ChatGPT, you type in your query and you get a text. How can we effectively distinguish truth from falsehood, understand the range of sources that may lie behind it, maintain a critical mind, develop multiple perspectives on a given piece of information? This is particularly serious because we are talking about the history of a genocide that is being manipulated today by extremist groups to spread anti-Semitism.

Artificial intelligence is also used to create fake videos, like this deepfake of actress Emma Watson reading “Mein Kampf”…

Here, you really have a case of artificial intelligence being used to spread anti-Semitic hate speech. You take an ultra-famous figure like Emma Watson, and if you want to reach and radicalize young people by broadcasting this kind of discourse, that is exactly what you can do, relatively easily. This content naturally ends up on digital platforms that are not necessarily moderated, and it can reach millions of users.

Conversely, it is possible to use these same tools to produce quality educational content.

Yes. UNESCO has digitized the testimony of a survivor, and one can chat with her…

After observing the extent to which distortion and denial of the Shoah were spreading on social networks, we decided to develop educational tools that could then be made available to users, this time using artificial intelligence for educational purposes and to produce interactive content that appeals to young people. The result is an immersive interaction with a survivor of the Theresienstadt ghetto (in what is now the Czech Republic), produced through a partnership between UNESCO, Meta, Storyfile, the World Jewish Congress and the Claims Conference.

This is one example of the educational experiences that can be built on these new tools. If we want to counter the harmful use of artificial intelligence by extremist groups, and if we want to limit algorithmic biases in AI, we must also develop materials that enable a counter-discourse. This involves, in particular, using artificial intelligence in research and in the digitization of archives, in order to make more sources available to artificial intelligence systems. In the same way, we must try to develop new teaching methods that make use of this technology.
