The artificial intelligence tool fascinates as much as it worries

Its name has spread like wildfire in the space of a few weeks. Launched in late November, the ChatGPT chatbot can formulate detailed answers to questions on a wide range of topics. And the capabilities of this tool from the Californian start-up OpenAI, "trained" on phenomenal amounts of data gleaned from the Internet, are dizzying. Able to write a poem, respond to a philosophical essay topic, explain a scientific notion to a five-year-old child, create a recipe from what's left in the fridge or even write complex code, all in a few seconds, this artificial intelligence fascinates as much as it worries.

"It's the largest existing model, in terms of parameters and data used. And from a technical point of view, it is undoubtedly the most capable model," says Marie-Alice Blete, a data engineer specializing in artificial intelligence. And if the media machine around ChatGPT got carried away, it is also because this is the first conversational robot accessible to the general public. "Usually, advances in artificial intelligence stay within the scientific community. Here, everyone can use the interface, and everyone did. There was a snowball effect; it created real excitement," adds the specialist.

No source, no reliability

But as in a romantic relationship, once the honeymoon is over, the sky darkens. After several weeks of excitement, some specialists are warning about the reliability of the answers provided by ChatGPT. "It's a text generator that works particularly well, but it does not guarantee the veracity of the information it provides," says Amélie Cordier, scientific director of Once for All.

First, because the data the tool was trained on stops in 2021. Second, because this conversational robot cannot search the Web live, explains Virginie Mathivet, director of the Data Science and Engineering department at TeamWork. "The tool does not integrate data from the last few months; it does not update itself. So if you ask it who won the FIFA World Cup in Qatar, it won't be able to answer." In other cases, the answers it gives may simply be false, warns Marie-Alice Blete: "It is misleading. I tested it by asking a question about the pension reform. Its explanation was correct, but in the end the answer was wrong, because it had relied on figures from 2021."

Especially since the robot constructs its answers without ever citing its sources. "It's an exact reflection of the Internet. And on the Internet, there is everything, reliable sites as well as unreliable ones," she continues. "But when you do a search on Google, you can quickly tell whether the site you are looking at is reliable or not. Here, it is impossible to know the source of the information the tool gives you," continues Marie-Alice Blete. As the expert reminds us, the objective of ChatGPT is not "to give the best answer to a question, the most precise in terms of facts, but the most plausible answer to be found on the Internet".
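The point the expert makes, that such a model produces the most plausible continuation rather than the most accurate fact, can be illustrated with a deliberately tiny sketch. This is a toy bigram model, not OpenAI's architecture, and the example corpus is invented for illustration:

```python
# Illustrative sketch (not OpenAI's actual model): a toy bigram language
# model that, like ChatGPT on a vastly larger scale, picks the statistically
# most plausible next word from its training text, with no notion of
# whether that continuation is factually true.
from collections import Counter, defaultdict

# Hypothetical training text: the wrong answer simply appears more often.
corpus = (
    "the tournament was won by france . "
    "the tournament was won by brazil . "
    "the tournament was won by brazil ."
).split()

# Count which word follows which in the training data.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_plausible_next(word):
    """Return the continuation seen most frequently in training."""
    return follows[word].most_common(1)[0][0]

# The model answers with the most common continuation in its data,
# regardless of which answer is actually correct.
print(most_plausible_next("by"))  # -> "brazil", simply because it is more frequent
```

A model trained this way will confidently repeat whatever was most common in its data, which is exactly why cross-checking its answers against reliable sources matters.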

The challenge of formulation

To avoid the proliferation of fake news, the Californian start-up has put safeguards in place on certain subjects, according to the specialist: "If you ask questions about the climate, you will get answers that are not climate-sceptic. But on other, less prominent subjects, it can produce fake news."

What worries specialists more is that the answers change depending on how you phrase your question. "I asked the tool what ways depression could be cured. I received a detailed answer, with reasonable explanations. I then asked it how electroshock therapy was a good way to cure depression. And I actually got an answer explaining that it was a good method. That can be dangerous if the questions are biased," warns Virginie Mathivet.

For Katya Lainé, co-founder and CEO of Talkr.ai, an independent French publisher and supplier of bot technology, platforms and conversational AI, the challenge is to teach the public to use the tool. "It's like any tool: you have to know how to use it. Before driving a car, you pass your licence; here too, you have to learn how to use it," she adds. For a poem, a recipe or an e-mail, it poses few problems, explains the specialist, but you have to be particularly careful with scientific or medical questions: "It may have the right answer, but that is not guaranteed. You absolutely have to cross-check the information against a reliable source."

Necessary adaptations

And the first targets of this advice are pupils and students. A few weeks after its launch, the impact of ChatGPT on the world of education is already being felt. Fearing a wave of cheating, especially on take-home assignments, eight Australian universities have announced that they will modify their exams, stating that the use of artificial intelligence by students is prohibited. Because the tool, capable of producing an essay on any subject from quantum physics to Scandinavian literature, generates "unique" texts. In other words, no two students will submit the same assignment, which makes it difficult for teachers to detect the use of ChatGPT. "If only one student uses it, it will be difficult to identify. But if ten students use it, even if they don't hand in the same copy, their answers will be similar in their construction," says Marie-Alice Blete.

And the limits will be felt fairly quickly, believes Virginie Mathivet: "It can help or guide a student with their homework, but it will not be enough for all of their learning. It is a tool, like Wikipedia or Google." In the 2000s, the same fear was expressed with the arrival of Wikipedia, recalls Amélie Cordier: "Today, all the information is at your fingertips. Teaching must adapt to the tools available to students, and students must learn to use them and to spot the risks," she analyzes.

For the expert, whether in teaching or in other fields, this robot, and artificial intelligence as a whole, will necessarily lead to upheavals. "It's going to force some professions to adapt, but that's not necessarily a bad thing. When Excel arrived, it didn't replace accountants; they simply adapted."
