ChatGPT invents a sex scandal – and turns a real professor into the perpetrator

Not everything ChatGPT confidently presents should be believed (symbolic image)

© Sanja Radin / Getty Images


It is well known that ChatGPT likes to invent facts. Now, however, this has had very concrete real-life consequences: a professor in the USA was falsely accused of sexual assault.

Writing long texts, explaining complex topics in chat, or simply making conversation: what the AI behind ChatGPT can produce is more than impressive. Of course, you should be aware that it can be error-prone and invent facts. For Jonathan Turley, a law professor in the USA, the AI’s inventiveness has now had startling consequences.

“It was pretty scary,” Turley told the Washington Post. As part of a study, a colleague had asked the chatbot about researchers who had sexually harassed someone, and found Turley’s name on the list as well.

Made up news about real people

According to ChatGPT, Turley had repeatedly made lewd remarks about female students, and during a study trip to Alaska in 2018 he is even said to have tried to grope a student. The bot cited a Washington Post article as its source. The problem: that article never existed. Turley had never been accused of inappropriate behavior toward students, and the trip to Alaska had never taken place.

Turley was understandably shocked. “Such allegations can be extremely damaging,” he says, explaining his concern. The MeToo movement, and the awareness it has raised of sexual assault within power imbalances, has been a major topic at American universities for years. Misconduct, or even the mere suspicion of it, can quickly have very severe consequences.

No oversight body

This makes the AI’s failure particularly problematic. The supposed citation of a reputable newspaper makes the allegations appear genuine to anyone who does not bother to check them carefully. On top of that, the AI presents its statements with great confidence, leaving an unwary reader with the impression that this is simply how things are.

The error, however, could not easily be corrected. Although Turley appears regularly in the media and has had to arrange corrections to stories before, this time he did not know whom to turn to; after all, there was no author who had made a mistake. OpenAI, the company behind the bot, had promised that the latest version of ChatGPT would “hallucinate” less, but extensive tests indicate that version 4 of the bot makes even more factual errors than its predecessor.

Accordingly, the professor is not the only one who has to fear for his reputation because of the AI’s false claims. Just this Wednesday, Reuters reported that an Australian mayor was considering suing OpenAI: the AI bot had falsely claimed that the politician had served a prison sentence – for bribery, of all things.

Sources: Washington Post, Reuters
