Web column: Who wrote it?

The CNet website used to be an institution. It was where people went for the latest opinions on computing: which new gadget should one buy, which system promises the greatest gain in prestige? Presumably because the editors considered themselves at the forefront of the future anyway (and because it also saves money), they began publishing texts written with the help of AI programs last fall. Rather bashfully, though: nothing more than a small note on the website pointed this out.

Almost 80 articles were created this way. And as the tech portal Futurism reported, the computer-generated texts not only turned out to be riddled with errors; the AI also shamelessly copied from competing news sites. Weeks after the experiment came to light, CNet editor-in-chief Connie Guglielmo responded, a little contritely: the use of AI would be suspended for the time being, she wrote, but the work on it would continue. A few days later, the entertainment platform BuzzFeed likewise announced that editorial content would in future be created by AI. The company's stock price doubled after the announcement.

GPTZero has grown almost as rapidly as ChatGPT

Since the AI company OpenAI released its ChatGPT model in early December, educators and media critics have scarcely been able to keep up with voicing their concerns. The CNet example shows that what most media are still discussing in theory has long been everyday practice: text at the push of a button. Here the ulterior motive was comparatively harmless, consisting merely of earning a few bucks through advertisements and affiliate links. The situation is different with AI-generated schoolwork or fake news.

So must teachers and critical readers continue to rely solely on their intuition to discern who wielded the pen? Not quite: the GPTZero program has risen almost as rapidly as ChatGPT itself. It lets you check whether a text was written by an AI or not. Websites such as detector.dng.ai or openai-openai-detector.hf.space make similar promises.

To reach a verdict, GPTZero examines texts according to two criteria that its 22-year-old developer Edward Tian calls perplexity and burstiness. The former measures the complexity of the text: if the text seems familiar to the bot, because it was trained on similar data, it has low perplexity and is therefore more likely to have been generated by an AI. The second criterion looks at variation within the text. People tend to write with greater burstiness, alternating longer, more complex sentences with shorter ones; an AI, by contrast, tends toward uniformity.
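The two criteria can be illustrated with a toy sketch. This is not GPTZero's actual implementation, which relies on a full language model; here a simple unigram word-frequency table stands in for the model, and burstiness is approximated as the spread of sentence lengths:

```python
import math
import re
import statistics

def burstiness(text: str) -> float:
    """Standard deviation of sentence lengths (in words).
    Human writing tends to alternate long and short sentences,
    so a higher value hints at a human author."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

def unigram_perplexity(text: str, freqs: dict) -> float:
    """Toy perplexity under a unigram model `freqs` (word -> probability).
    Lower perplexity means the text looks 'familiar' to the model,
    which detectors like GPTZero read as a hint of machine authorship."""
    words = text.lower().split()
    floor = 1e-6  # tiny probability assigned to unseen words
    log_prob = sum(math.log(freqs.get(w, floor)) for w in words)
    return math.exp(-log_prob / max(len(words), 1))
```

A real detector would compute perplexity with a neural language model rather than word counts, but the decision logic is the same: low perplexity plus low burstiness points toward the machine.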

“Automatically generated texts are something we have to adapt to, and that’s a good thing.”

None of these tools, however, achieves 100 percent accuracy. Elsewhere, the use of so-called radioactive data is therefore being considered. Researchers at Facebook's parent company Meta have already shown for AI-generated images that the computer concoctions can be identified as such if the model was trained with "radioactive data", i.e. with images that have been subtly altered so as to slightly distort the training process. Like a Geiger counter, this detection works even when only one percent of a model's training data is radioactive, and even when the model's visual output looks practically like normal images. The technique could also be applied to text models like ChatGPT.
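The idea behind radioactive data can be sketched in a few lines. This is a deliberately simplified illustration, not Meta's actual method: a nearest-centroid rule stands in for a trained model, the perturbation strength is exaggerated for clarity, and all names are invented for the example. Marked training samples are shifted along a secret direction, and a model trained on them inherits a measurable correlation with that direction:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 256                        # feature dimension of the toy data
u = rng.normal(size=d)
u /= np.linalg.norm(u)         # secret "radioactive" direction

def train_centroid(x0, x1):
    # toy stand-in for a classifier: the direction from class 0 to class 1
    w = x1.mean(axis=0) - x0.mean(axis=0)
    return w / np.linalg.norm(w)

x0 = rng.normal(size=(500, d))           # class-0 training samples
x1 = rng.normal(size=(500, d))           # class-1 training samples

w_clean = train_centroid(x0, x1)         # model trained on clean data
w_marked = train_centroid(x0, x1 + u)    # model trained on marked data

def geiger(w):
    # the "Geiger counter": correlation of the model with the secret mark
    return float(w @ u)
```

Only models trained on the marked samples show a strong reading on the `geiger` test; for clean models the correlation stays near zero. The real research applies far subtler perturbations in the feature space of deep networks, which is why even a one-percent dose remains detectable.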

“We are now in a new world,” OpenAI boss Sam Altman said recently in an interview when asked about concerns over the flood of AI-generated text. “Automatically generated texts are something we have to adapt to, and that’s a good thing.” What exactly such an adaptation might look like, Altman unfortunately did not say.

That is the classic Silicon Valley answer: you don’t want to worry too much about the social implications of your own creation, so others will have to take care of that. What was the old Facebook motto again? Move fast and break things. Never mind what falls by the wayside.
