Should we fear a “Terminator” scenario with the rapid development of AI?

“Should we take the risk of losing control of our civilization?” This is one of the questions raised by hundreds of experts and public figures in the field of artificial intelligence (AI). Among them: Elon Musk, CEO of Tesla, SpaceX and Twitter; Steve Wozniak, co-founder of Apple; and Jaan Tallinn, co-founder of Skype. All make the same request in an open letter published on Tuesday, March 28: that “AI labs [pause] immediately, for at least 6 months, the training of AI systems more powerful than GPT-4”. This latest AI system, available since March 14, has already attracted a great deal of attention, and its improved version, GPT-5, is already in development.

“The principle of today’s AI is to be able to predict what you are going to say before you even say it,” Colin de la Higuera, professor and researcher in artificial intelligence and holder of a Unesco Chair combining education and AI, explains to 20 Minutes. And while a few years, or even a few months, ago this technology could only predict a few words, today it can predict whole sentences.
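To illustrate this principle of next-word prediction, here is a minimal sketch using the small, publicly available GPT-2 model and the Hugging Face transformers library. These choices are assumptions made for illustration only; they are not the systems discussed in the article.

```python
# Minimal sketch of "predicting what you are going to say": an autoregressive
# language model repeatedly guesses the most likely next token and appends it.
# Assumes the Hugging Face `transformers` library and the small public GPT-2
# model, not GPT-4, which is not freely downloadable.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Artificial intelligence will"
inputs = tokenizer(prompt, return_tensors="pt")

# Greedy decoding: extend the prompt one most-likely token at a time,
# which is how a few predicted words become whole predicted sentences.
outputs = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```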

The “old demons” of science fiction

But is a Terminator scenario, in which a self-sufficient artificial intelligence decides to hunt down and kill humanity, actually possible? To understand this fear, let’s revisit an old theory that frightens many: the singularity. “It’s an old demon of science fiction, which seems to be coming back with the current debate. The singularity is the idea that one day we would have an artificial intelligence superior to humans, which would be unstoppable, which would develop on its own and which, one day, would no longer need us,” explains Colin de la Higuera.

But today, according to this researcher, who is also a co-signatory of the open letter mentioned above, there is no risk of an AI destroying our civilization on its own. This position is shared by Karine Deschinkel, university professor in computer science and director of the computer science laboratory at the FEMTO-ST institute. “We will always have control of this software,” the researcher reassures. For these experts, the risk lies not there, but in the speed, without safeguards, at which these systems are being developed. “I have the impression that we are removing very important carriages from the AI research train,” says Colin de la Higuera, using a metaphor. “Today, researchers no longer have the time to analyze the technology and the societal impact of these advances.”

So Colin de la Higuera, like the other experts who co-signed the open letter, is calling for a slowdown. But why? The major problem, for these experts, is that no ethical framework exists today. A few unofficial rules have been written by the Future of Life Institute, inspired by Isaac Asimov’s famous “Three Laws of Robotics”. But nothing official. “To write an ethical framework, we need time, and we are behind schedule. Nobody knows which ideas are the good ones. And yet, every week we have problems,” says Colin de la Higuera.

An unstoppable race

But for Karine Deschinkel, “today, the race for AI is unstoppable”: “If some stop, others will continue. Given the financial stakes behind it, it is unrealistic and utopian to stop the research,” the researcher explains.

And to help them in this task, few measures or frameworks have been put in place by the French public authorities, or by international ones. And the dangers are real, though far from a scenario with Arnold Schwarzenegger in the lead role: rather, they come from “thugs” who could misuse the technology.

But not everything should be thrown out, far from it. Today, AI is present everywhere and provides significant help, especially in the field of health. For example, “there is an artificial intelligence that estimates the number of emergency-service interventions in the hours to come by retrieving all the latest data on weather, flooding, words typed into Google and on social networks, and popular upcoming events; it learns from these data and is then able to make predictions,” says Karine Deschinkel. “We must not be afraid of AI,” Colin de la Higuera also reassures, before concluding: “What is needed is to succeed in giving democracy time.”
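To make the kind of prediction Karine Deschinkel describes more concrete, here is a rough sketch of a model that estimates intervention counts from recent signals. The feature names, the synthetic data and the choice of scikit-learn’s gradient boosting regressor are all assumptions for illustration; the actual system she refers to is not detailed in the article.

```python
# Rough sketch of estimating emergency-service interventions from recent
# signals (weather, flood alerts, search activity, upcoming events).
# All column names, the synthetic data and the model choice are assumptions;
# the real system mentioned in the article is not described in detail.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Hypothetical features: [rainfall_mm, flood_alert_level, search_volume, event_attendance]
X = rng.random((500, 4))
# Hypothetical target: interventions in the next few hours, driven mostly by weather
y = 10 + 30 * X[:, 0] + 15 * X[:, 1] + 5 * X[:, 2] + rng.normal(0, 2, 500)

model = GradientBoostingRegressor().fit(X, y)

# Predict for a new situation: heavy rain, high flood alert, moderate search activity
new_situation = np.array([[0.9, 0.8, 0.4, 0.2]])
print(f"Predicted interventions: {model.predict(new_situation)[0]:.0f}")
```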
