Artificial intelligence
Google search: Errors in AI overviews make the internet laugh

Google has to endure ridicule online because of errors in AI overviews. Photo: © Andrej Sokolow/dpa

When Google announced that it would add AI summaries to search results in the US, website operators worried that fewer users would visit their pages. But the overviews have a completely different problem.

Since Google added AI overviews to its search engine across the US, embarrassing and even disturbing errors in the software have been making the rounds on the internet. For example, one user posted a recommendation to use non-toxic glue to attach cheese to pizza. According to screenshots, the search engine claimed in AI responses to other users that dogs had played in the NBA and NFL and that Barack Obama was the first Muslim president of the USA.

The summaries created with the help of artificial intelligence – called “AI Overviews” in the US – are intended to give users a direct answer to their questions more quickly, instead of presenting them with a list of web links. Several start-ups want to use AI answers to challenge Google’s dominance in web search. But the internet group is also moving in this direction itself. The search engine has long displayed short answers to factual questions above the links; now some of them are texts several paragraphs long.

The function is to be introduced in other countries by the end of the year. Many website operators and media outlets worry that the AI summaries will lead Google to direct fewer people to them and that their business will suffer as a result. Google counters that sources of information that end up in the overviews actually receive more traffic. How the rest are faring, however, remains unclear.

What is the problem?

However, the widespread introduction of the function has now revealed a completely different problem: in many cases, the AI software does not seem able to distinguish serious information from jokes or satire. The sources for some particularly silly claims were joke posts on online platforms or articles from the satirical website “The Onion” – such as the claim that geologists recommend eating a small stone a day. A Google spokeswoman told the technology blog “The Verge” on Thursday that the errors were “generally due to very unusual requests and do not correspond to what most people experience”. These “isolated examples” would be used to improve the product.

In February, Google had to endure online ridicule over another AI program. The Gemini software generated images of non-white Nazi soldiers and non-white American settlers. Google explained that it had failed to program exceptions for cases in which diversity was clearly out of place. Afterwards, the company temporarily stopped Gemini from generating images of people.

dpa