USA: Artificial intelligence writes articles – with plenty of errors

Machine-made mistakes
Artificial stupidity: US news website has AI write dozens of articles – and makes a fool of itself

Artificial intelligence is no longer uncharted territory in journalism. But it is still viewed critically (symbolic image)

© Andrey Popov / Imago Images

What should artificial intelligence be, by definition? Intelligent, of course. But as a US online magazine inadvertently demonstrated, even the most advanced technology (still) has its limits.

“Investing can be an effective way to make more money from your money.” This is how a recently published advice piece on compound interest on the US news website Cnet begins. Admittedly, the insight in that first sentence is not exactly surprising. Far more interesting are the errors that crept in as the text went on.

The article claims, among other things, that a deposit of 10,000 US dollars at three percent interest would throw off a whopping 10,300 dollars in profit after one year. The author was off by a whopping 10,000 dollars: the actual gain is 300 dollars, and 10,300 dollars is merely the new balance. Yet the author should not really be making mistakes at all, because he is not a human being but a machine. An artificial intelligence had written the text. The reader, however, could not tell at first glance: only a nondescript “Cnet Money” editor is listed as the author.
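For anyone who wants to verify the figures, here is a minimal back-of-the-envelope sketch using the numbers from the Cnet example quoted above (the variable names are ours, purely for illustration):

```python
# Back-of-the-envelope check of the Cnet example: 10,000 dollars at 3 percent.
deposit = 10_000.00
annual_rate = 0.03

interest_year_one = deposit * annual_rate        # 300.00 dollars of interest
balance_year_one = deposit + interest_year_one   # 10,300.00 dollars in total

print(f"Interest after one year: {interest_year_one:,.2f} USD")  # 300.00 USD
print(f"Balance after one year:  {balance_year_one:,.2f} USD")   # 10,300.00 USD

# The 10,300 dollars is the new account balance, not the profit;
# the actual gain after one year is only 300 dollars.
```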

Since November 2022, around 75 articles are said to have been written, entirely or with the help of artificial intelligence, and published on the news website of the US media company Cnet, a CBS subsidiary. Some of them contained serious factual errors.

“Editorial Failure”

It was the technology magazine “Futurism” that caught the well-known news site out. Its editors showed that an unnamed AI had generated dozens of error-ridden articles for Cnet. Cnet then admitted as much, but explained that the whole thing should be understood more as an experiment; after all, the site has “for more than two decades […] gained a reputation for testing new technologies,” wrote editor-in-chief Connie Guglielmo. They simply wanted to find out “if the technology could help our busy reporters and editors in their work to cover topics from a 360-degree perspective,” Guglielmo said.

On Tuesday, Cnet began appending lengthy correction notices to some of the articles in question. Dozens of articles on Cnet and its partner site Bankrate now state that “we are currently reviewing this article for accuracy” and that “if we find any errors, we will update and correct it.” If that is true, “then this is primarily an editorial failure,” Hany Farid, a professor of electrical engineering and computer science at the University of California, Berkeley, told the Washington Post. Even so, not all of the mistakes have been ironed out, as “Futurism” notes.

Artificial intelligence: no longer new territory in journalism

Artificial intelligence has long since become part of our everyday lives: smartphones use it to recognize our faces, Netflix algorithms recommend films and series tailored to our tastes, and in cars it even drives us from A to B on its own. In journalism, too, AI has not been uncharted territory for a long time. It is, however, viewed extremely critically there. An AI would theoretically need only a few seconds for this article as well, the question of quality set aside. The chatbot ChatGPT, which sometimes delivers remarkably complex answers to user questions, has also caused a stir.

The US news agency Associated Press was already using AI in 2014 for reports on corporate earnings and sports results. However, those articles were far more superficial and mostly limited to slotting updated numbers into ready-made template texts. According to the Washington Post, other bots are used for internal editorial checks. An algorithm at the Financial Times, for example, combs through articles to determine whether too many men are quoted in a text.
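How such a check might work can be illustrated in a few lines of code. The sketch below is purely hypothetical and has nothing to do with the Financial Times' actual tool; it simply counts "he said" against "she said" as a crude proxy for whose voices dominate a text:

```python
import re

# Purely illustrative sketch; this is NOT the Financial Times' actual tool.
# It naively tallies "he said" versus "she said" to gauge quote balance.
def quote_balance(text: str) -> dict:
    male = len(re.findall(r"\bhe said\b", text, re.IGNORECASE))
    female = len(re.findall(r"\bshe said\b", text, re.IGNORECASE))
    return {"male_quotes": male, "female_quotes": female}

article = '"Rates will rise," he said. "Not so fast," she said. "We agree," he said.'
print(quote_balance(article))  # {'male_quotes': 2, 'female_quotes': 1}
```

A real newsroom tool would of course resolve names and pronouns far more carefully; the point here is only the principle of automated editorial checks.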

The AI texts from Cnet, by contrast, can hardly be distinguished from “handmade” articles at first glance, apart from the fact that they are rather clumsily worded (regardless of the topic). The editors now flag the machine's involvement in the articles concerned: “This article was assisted by an AI engine and reviewed, fact-checked and edited by our editorial staff.”

Another problem with AI in journalism is the risk of plagiarism. Algorithms are neither creative nor inherently critical. In other words: AI articles lack any capacity for original synthesis; they merely rehash what people have already written. Despite all these concerns, AI is by no means a no-go for the journalism of the future, but rather a tool. And with a tool, in the end everything depends on the craftsman. Perhaps you will find a mistake or two in this text as well, despite careful checking. In that case, we ask your forgiveness. We journalists do make mistakes now and then; after all, we are not machines.

Sources: “Cnet”; “Business Insider”; “Washington Post”; “Futurism”
