“Preventing extinction through AI”: Why creators of artificial intelligence are warning – and what they are demanding

They don’t know when, and they don’t know which artificial intelligence – but the fear that AI could harm humanity runs deep.

© grandeduc / Getty Images

The second urgent open letter within a few weeks: The concern that artificial intelligence could cause great harm to humanity is growing. Industry leaders have now issued a brief but serious warning.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Leading figures in the industry published this short, pointed statement on the website of the “Center for AI Safety”. The signatories include the “godfather of AI” Geoffrey Hinton (“Godfather of AI” regrets his life’s work), OpenAI boss and ChatGPT creator Sam Altman, numerous Google employees, Microsoft’s CTO and other experts.

It is already the second public open letter that leading figures in the industry have sent out into the world. Previously, Elon Musk and Apple co-founder Steve Wozniak, among others, had signed a detailed demand for rules and tight controls as soon as possible wherever artificial intelligence is used in important areas of life (immediate pause in development demanded).

An open warning – meant to unite all leading minds

Speaking to the “New York Times”, Dan Hendrycks, executive director of the Center for AI Safety, explained why the warning was kept so short and unspecific this time – voicing concerns that are otherwise usually expressed only in small circles. The statement was worded so openly in order to cover as many worries and needs as possible, but also to avoid disagreement over individual questions.

“We didn’t want to push for a very large selection of 30 possible measures,” Hendrycks told the paper. “When that happens, it waters down the message.” He continued: “There is a common misconception, even in the AI community, that there are only a handful of doomsayers. In fact, many people privately express concerns about these things.”

The worries and fears, however, remain manifold. While the topic of the hour is probably the erosion of trust in texts and images – recently made all too clear by a fake picture of the Pope (Why we can no longer believe our eyes) – experts in particular are thinking bigger and further ahead.

The greatest concern for many is the rapid pace of the technology’s development. What seemed unthinkable just a few years or even months ago can now be done with a few clicks. Some fear that this could culminate, in the not too distant future, in a so-called “AGI”. “AGI” stands for “Artificial General Intelligence”, meaning something like a human-level intelligence. Such an AI would be able to understand any task and even outperform humans at it.

Experts, above all OpenAI co-founder Sam Altman, are repeatedly calling for action to be taken as soon as possible, before it is too late. In a blog post, one of the industry’s currently most important figures makes concrete suggestions: Altman calls for close cooperation between all companies developing AI and considers an international, independent oversight body indispensable, similar to the one that exists for nuclear technology.

Regulation by governments or corporations viewed critically

However, the current statement contains no such demand. That is probably what Hendrycks meant by the open formulation – because others direct their criticism of AI development at an entirely different point. Former Google researcher Meredith Whittaker, for example (What we should really be afraid of), already considers it dangerous that the development of AI and the most widely used products lie in the hands of a few corporations – so that only a small group of people holds a correspondingly large amount of power.

Some governments have recognized this as well and, out of caution, have temporarily blocked entire AI services and confronted their operators with data-protection demands. In Italy, ChatGPT was unavailable for about a month. Meanwhile, Google’s chatbot Bard remains unavailable throughout the EU because Google wants to avoid similar hurdles.

However, responsibility for AI cannot be placed solely in the hands of governments either – because what is regulated in the EU or the USA can easily be developed in other countries, perhaps even with malicious intent. The development of artificial intelligence therefore cannot depend on individual actors who control themselves but not others.

Many are therefore critical of such an approach – as well as of an international supervisory authority – and fear that the development of AI can no longer be stopped, regardless of the direction it takes.

Also read:

How to recognize AI generated images

“What if it’s smarter than us?”: The “godfather of AI” regrets his life’s work – and fears its consequences