With AI and deepfakes, “criminals have a lot of things at their disposal,” acknowledges General Perrot

In the space of a year, generative artificial intelligence has shown what it is capable of. ChatGPT can write a philosophy essay in minutes; Dall-E and Midjourney generate ultra-realistic fake images in the blink of an eye. We have seen images of Emmanuel Macron as a garbage collector, at a Yellow Vest demonstration or as a couture model flood the Web. This new generation of AI is also changing the face of crime, and it is even scarier than an episode of Black Mirror. General Patrick Perrot, AI coordinator for the National Gendarmerie, spoke at the fourth edition of the Artificial Intelligence Forum last December. He talks to 20 Minutes about crime in the era of generative AI.

During your speech at the AI Forum, you pointed out how difficult it is to keep pace with criminals’ uses of AI. In what way?

It is essential to keep pace with criminals’ uses of AI, and with the speed at which Big Tech is developing it. In reality, for the internal security forces (ISF), there is both a crime-fighting issue and a sovereignty issue. At the speed at which Big Tech is advancing, a number of applications could be developed by these groups and made available on smartphones, but the risk of attacks on individual freedoms is real. Today, Big Tech is one step ahead in computing power, data collection and storage capacity. For the internal security forces, the challenge is to move towards open source rather than towards proprietary solutions like OpenAI’s. Otherwise, we could become dependent on Big Tech, given that each citizen hands them a great deal of information, which reinforces their potential sovereignty.

What about criminals?

Open source is also within the reach of criminals. They now find a lot of things at their disposal: computer code on many subjects, such as automatic speech transcription or the creation of deepfakes, for example. They also find ready-made applications. The sophisticated criminal will work on the computer code; the opportunistic offender can use a simple deepfake application to generate a child pornography video with their neighbor’s face. And the next day, the police knock on that neighbor’s door at 6 a.m. Our role is to be able to say: no, that is not this person’s face in the video. We also work for the defense, to exonerate people who have been wrongly indicted.

How do you go about it?

We developed an exploratory tool, Authentik IA, which in June 2023 received the Datacraft Awards prize for the project with the greatest potential societal impact. For deepfake images, we generated 20,000 fakes with different generative adversarial network (GAN) models. When a deepfake has been produced by one of these methods, we achieve up to 95% correct detection. If it was generated by a method we have not seen, we drop to 70%. We absolutely need to keep up to date with everything that comes out: as soon as new methods appear, we must enrich our learning models to stay effective. But behind the tool, there is always a human. If we detect a fake, there is a very high probability that it is fake. On the other hand, if we do not detect a fake, that does not mean it is not one. A human in the loop is essential to bring the different pieces of evidence together.
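Authentik IA’s internals are not public, but the approach the general describes, training a detector on images produced by known GAN models, can be sketched in a few lines. Everything below (dataset layout, backbone, hyperparameters) is an illustrative assumption, not the Gendarmerie’s actual pipeline:

```python
# Minimal sketch: train a binary classifier on real images vs.
# GAN-generated fakes. Model choice and data layout are assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models

# Hypothetical folder layout: data/train/real/... and data/train/fake/...,
# the latter filled with images from several GAN generators.
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("data/train", transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Small off-the-shelf CNN backbone with a two-class head (real / fake).
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 2)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

The 95%-versus-70% gap he mentions is exactly what such a classifier exhibits: it performs well on generators represented in its training data and degrades on unseen ones, which is why the training set has to be continually enriched.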

“With deepfakes, you can send the video of a company president requesting an account transfer of 800,000 euros, in his own voice”

Does the tool also work on text and audio?

For audio, we created a database of 10,000 recordings, fake and real. We use classification methods that draw a boundary between the acoustic parameters of falsified audio and those of genuine audio. When we are faced with a new recording, that boundary lets us say: this one is fake, this one is genuine. We achieve very good detection levels. We did the same for text. We generated texts with ChatGPT and Llama 2, and took others written by humans from the Web… We tried to determine which passages were generated by an AI and which by a human. The scores are interesting once we have a significant sample. Given enough text, we see quite repetitive sentence structures in ChatGPT or Llama 2 output, and the vocabulary is more limited than a human’s.
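The “boundary between audio parameters” he describes corresponds to a standard machine-learning pattern: extract acoustic features from each recording, then fit a classifier whose decision boundary separates real from fake. The Gendarmerie’s actual features and classifier are not public; the sketch below uses MFCC features and an SVM purely as a plausible illustration, with placeholder file paths:

```python
# Illustrative sketch only: MFCC features + an SVM decision boundary
# separating genuine recordings from synthesized ones. Features,
# classifier and file layout are assumptions, not the real tool.
import numpy as np
import librosa
from sklearn.svm import SVC

def mfcc_features(path: str) -> np.ndarray:
    """One fixed-size feature vector per recording."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)

# Placeholder manifest: label 0 = genuine voice, 1 = synthesized/cloned.
paths = ["corpus/real/clip_001.wav", "corpus/fake/clip_001.wav"]
labels = [0, 1]

X = np.stack([mfcc_features(p) for p in paths])
clf = SVC(kernel="rbf")  # learns the boundary between the two classes
clf.fit(X, labels)

# Verdict on a new recording:
# clf.predict(mfcc_features("new_audio.wav").reshape(1, -1))
```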

What will crime look like in the age of AI?

On deepfakes, our big problem is fraudulent transfer orders. The success rate is low, but since criminals target a lot of companies, some of them will be fooled. With deepfakes, you can send the video of a company president requesting an account transfer of 800,000 euros, in his own voice. It will be hard for the accountant or the secretary not to make the transfer, to tell themselves it is fake, because they will see their president speaking. Today, these frauds work with simple phone calls, even though the voice used is the criminal’s own. Imagine when they use the face and voice of the company’s president.

Are we going to see other types of crime emerge that are not linked to deepfakes?

With generative AI of the GPT type, there are applications that make it possible to create phishing campaigns and malware, and to carry out cyberattacks. We are monitoring these subjects closely. Criminals can run phishing campaigns or create new malware that subverts current systems. Malware infects your computer and you lose access to your data. Then comes the blackmail, as we saw with the ransomware attacks on hospitals. Conspiracy theories are also likely to grow considerably with generative AI, which can generate whatever is asked of it very easily.

“One of the dangers of artificial intelligence is the end of the human capacity to theorize”

Do you mean that new conspiracy theories will be invented thanks to AI?

With generative AI, you can generate data with a view to destabilizing states. It is a real subject. We already know there will be this type of problem around the 2024 American elections. From databases of conspiracy material, you tell the AI to generate a conspiracy theory about the pharmaceutical industry, for example. It will generate a theory never seen before, well-argued and solid-looking.

To come back to new types of crime, do you see any others?

Consider what are called adversarial attacks. The objective is to inject noise into the system, undetectable by humans, which distorts the interpretation of the artificial intelligence system. This noise is optimized. Shown a photo of a zebra, for example, the system sees a gorilla. Tomorrow, in the field of autonomous vehicles, you could make a Stop sign pass for a 110 km/h speed-limit sign.
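The interview does not name a specific technique, but the best-known way to “optimize” such noise is the fast gradient sign method (FGSM). The sketch below is a generic textbook illustration of the idea, not a description of any real attack on a deployed system; the model, epsilon value and input conventions are assumptions:

```python
# Minimal FGSM sketch: noise invisible to a human that can flip a
# classifier's answer. Purely illustrative; inputs assumed in [0, 1].
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1").eval()
criterion = nn.CrossEntropyLoss()

def fgsm_attack(image: torch.Tensor, label: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    """Perturb `image` by epsilon in the direction that maximizes loss."""
    image = image.clone().requires_grad_(True)
    loss = criterion(model(image), label)
    loss.backward()
    # Tiny sign-of-gradient noise: imperceptible, but can change the label.
    return (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

# Example usage (hypothetical tensors):
# x = torch.rand(1, 3, 224, 224)   # an image scaled to [0, 1]
# y = torch.tensor([340])          # 340 = "zebra" in ImageNet
# x_adv = fgsm_attack(x, y)        # looks identical to a human eye
```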

You must keep your free will as a human…

More than ever. The risk of artificial intelligence lies more in education than in security: it concerns the ability to retain one’s free will, one’s ability to construct mathematical models independently of observation. Albert Einstein theorized gravitational waves in 1916; we observed them in 2016. AI does the opposite: it observes and, from the observation, produces not a theory but a result. If we only work with AI, we will end up no longer being able to theorize.
