How universities and grandes écoles are reviewing student assessment methods

A wind of concern is blowing through higher education. Since the launch of ChatGPT last November by the Californian start-up OpenAI, universities and schools have been navigating troubled waters. From a simple prompt, in a matter of minutes, this conversational robot can summarize a book, draft a dissertation outline, translate a text, produce polished prose… At 20 Minutes, we put it to the test: “Can you write a comparative study of The Charterhouse of Parma and Madame Bovary?” The result is striking, but the text lacks quotations to support its points. So we made a new request and the robot obliged.

The effectiveness of this generative AI has not escaped students’ notice. “As of January, 95% of our students were aware of ChatGPT’s existence, and the majority of them had already used it,” relates Stéphane Justeau, professor and director of the institute of advanced pedagogy at Essca, a business school. “It is now a very common practice among students. We are facing a technological revolution,” confirms Daniel Courivaud, research professor at the Esiee engineering school.

Cases of cheating are starting to surface

As one might fear, students’ use of ChatGPT is not always virtuous. At the University of Strasbourg, for example, around twenty students who had used the chatbot to obtain the answers to a remote multiple-choice test were caught and had to retake their exam in person. For homework, the risk is also great, especially since the robot delivers slightly different versions when the same request is made several times, which makes copied work harder to spot.

Such fraud penalizes students in the short and medium term. “Not only can those who get caught be punished, but if they are not exposed, they will get better grades than others who have worked honestly and done less well. It’s unfair,” says Stéphane Justeau. And according to Virginie Tahar, lecturer in contemporary literature at Gustave-Eiffel University in Champs-sur-Marne, students could develop very bad habits: “The risk, if the software always thinks for them, is that it ends up reducing their ability to think. Not to mention their writing skills, which would make it harder for them to find a job.”

Redoubling vigilance when grading

Faced with this phenomenon, many institutions have set up working groups, but they are still feeling their way and struggling to issue guidelines for their teaching teams. Asked by 20 Minutes, the entourage of the Minister of Higher Education indicates that “Sylvie Retailleau’s position is not to ban this type of software, but to support its uses”, and that the minister will not dictate to institutions how to assess students in the face of AI. Institutional autonomy requires it.

Several start-ups have already designed ChatGPT content detectors, but few higher education institutions have adopted them so far. “What’s more, the detectors are not 100% reliable,” observes Stéphane Justeau. For the time being, teachers who grade homework likely to have been done with ChatGPT’s help therefore rely on their vigilance alone. They look for changes of style within a single assignment, missing quotations, the sources cited, repetitions, an absence of spelling mistakes, overly flat answers, unusual punctuation, a lack of depth in certain arguments, contradictions… clues that may suggest the robot has been used. “AI does not necessarily give a correct answer, but a plausible one. The texts it produces require proofreading by an expert to be corrected,” emphasizes Stéphane Justeau. “When you submit a request in science, the form is stunning, but some statements are nonsense,” also notes Daniel Courivaud.

Changing the nature of homework

“But this race for detection is somewhat futile, because ChatGPT has already evolved a great deal in recent months,” the professor observes. Aware that they will not be able to fight AI on equal terms, some institutions believe the only truly effective solution is to rethink how students are assessed, for example by limiting take-home assignments. “We will have to rely more on in-class written exams and oral assessments, even if it means a little less teaching time,” estimates Daniel Courivaud. And for work done at home, the subjects themselves will need rethinking, stresses Julien Pillot, teacher-researcher in economics at Inseec, a business school: “We must drop knowledge checks and descriptive essays in favor of case studies or exercises designed to stimulate reflection and critical thinking. ChatGPT cannot handle exercises that call for original solutions, which still leaves plenty of options.”

Stéphane Justeau agrees: “Students should no longer be evaluated on the simple restitution of content, but on skills, such as applying a concept to a fictitious situation, for example.” Virginie Tahar, too, continues to give homework, but with safeguards: “I ask my students to produce creative texts under very formal constraints. For the moment, ChatGPT’s answers lack precision with this type of instruction. And beforehand, to be safe, I run a test to see how the robot responds to my writing prompts.”

Giving more weight to dissertation defenses

But it is on master’s dissertations that teachers’ concerns are greatest, because while some tackle subjects that have never been explored, that is not true of all. And while ChatGPT is currently unable to produce a 100-page dissertation, it can supply parts of one. “We must give more weight to the defense, which allows us to ask the student specific questions and to have them justify their research choices… If they are unable to argue their case, we will quickly see where the problem lies,” says Stéphane Justeau. “In addition, we meet the student several times during the year to review their research. If they are not driven by their subject, if the subtlety of the analysis is not there, we will notice the discrepancy,” adds Virginie Tahar.

Many institutions have also understood that it is more effective to tame the tool than to demonize it. Sciences Po has therefore prohibited the use of ChatGPT without explicit acknowledgment in the content produced by its students. The school has also decided to integrate a course on AI into all of its first-year master’s (M1) programs in spring 2024 and to train teachers on the subject. An approach endorsed by Stéphane Justeau: “Students must be trained in the use of the software, both from a technical and an ethical point of view. And they should work on texts produced by the AI, taking a critical look at them in order to improve them. Seen this way, AI could even become an interesting component of teaching, pushing students to surpass themselves.” “As long as our students are asked to be smarter than a robot, there won’t be a problem. After all, isn’t the purpose of higher education to train people to solve complex problems?” asks Julien Pillot.
