Artificial Intelligence: What’s Behind the AI Apocalypse Warning

It is the only mission statement on the website of the San Francisco-based think tank Center for AI Safety (CAIS): “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” In other words: artificial intelligence (AI) endangers humanity, and it is important to stop it. Things seem to be getting serious.

But the interesting thing is not the sentence itself; warnings about AI have come repeatedly from Silicon Valley and elsewhere in recent months. The interesting thing is who signed the appeal: the leaders of almost every major AI company, among them Sam Altman, Ilya Sutskever, Mira Murati and Greg Brockman of OpenAI, the company behind ChatGPT. That chatbot triggered the current hype surrounding the technology. The new chat and image programs can produce text and images in impressive ways because they have been trained on huge amounts of data with a great deal of computing power.

The bosses of the AI companies Anthropic and Stability AI have also signed; their highly developed AIs can likewise generate text and images. So has Demis Hassabis of Google’s subsidiary DeepMind, the company known for AIs that beat human grandmasters at board games and for advances in biological simulation. Other high-level signatories are executives at Microsoft, which works closely with OpenAI, and researchers at universities such as Harvard and Stanford. So the creators of AI are warning against their own products. The question arises whether they want to destroy their own business.

The concrete dangers that the CAIS warns about on its website include: AI could be used to develop chemical weapons; people could become totally dependent on machines, with human abilities atrophying as a result; and an AI could strive for power and collaborate with other AIs against humans.

But how likely are such scenarios? Yann LeCun, chief AI scientist at Meta and a luminary in the field, did not sign. For him, such warnings are scaremongering: as long as there is not even “dog-level” AI, let alone human-level AI, warning of superhuman AI is premature. Others, such as computational linguist Emily Bender, have criticized the apocalypse narrative spread by AI entrepreneurs as a red herring: it shifts attention to a distant, hypothetical fight for the future of mankind and away from the people who are already being discriminated against by AI today. That happens, for example, when automated AI systems adopt prejudices and biases from the data sets they learn from. If, say, the salaries of Black people in a data set are lower than those of other people, the AI could learn that Black people should earn less, and act accordingly.
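How a model inherits such a bias can be made concrete with a minimal, purely hypothetical sketch (synthetic data and scikit-learn’s LinearRegression; the numbers and features are invented for illustration, not taken from any real system):

```python
# Purely synthetic illustration of how a model inherits bias from its training data.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 1_000

# Features: years of experience (job-relevant) and group membership (should be irrelevant).
experience = rng.uniform(0, 20, n)
group = rng.integers(0, 2, n)  # 1 = historically underpaid group

# Historical salaries encode discrimination: group 1 earns about 5,000 less.
salary = 30_000 + 2_000 * experience - 5_000 * group + rng.normal(0, 1_000, n)

model = LinearRegression().fit(np.column_stack([experience, group]), salary)

# Two candidates, identical except for group membership:
print(model.predict([[10.0, 0.0], [10.0, 1.0]]))
# The second prediction is about 5,000 lower: the model has "learned" the pay gap.
```

Nothing in the algorithm distinguishes legitimate signal from historical discrimination; the model simply reproduces whatever pattern the data contains.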

Are they super machines, or just some kind of impostor?

So perhaps the signatories of the letter are not acting in the interests of humanity at all? Their advance, which is about the big picture, the future of mankind, could also serve to stifle small-scale regulation of their language models in the here and now. During Sam Altman’s visit to Europe last week, the actual issue was preventing overly strict EU rules for language models.

And in fact, the insiders who are now warning so loudly have so far failed to explain how today’s AI programs à la ChatGPT could become a dangerous “superintelligence”. ChatGPT is based on a so-called large language model; Stable Diffusion from Stability AI is a related generative model for images. Roughly speaking, such a language model calculates which continuation best fits the question or instruction a person enters. And it keeps producing statements that look to the user like logical conclusions.
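That core mechanic, estimating which token most plausibly comes next, can be illustrated with a toy model. The following character-level bigram counter in plain Python is a deliberately crude sketch; real systems like ChatGPT use deep neural networks with billions of parameters, but the underlying task is the same:

```python
# Toy next-token predictor: a character-level bigram model.
from collections import Counter, defaultdict

text = "the cat sat on the mat. the cat ate the rat."

# Count how often each character follows each other character.
follows = defaultdict(Counter)
for a, b in zip(text, text[1:]):
    follows[a][b] += 1

def predict_next(char):
    """Return the most likely next character and its estimated probability."""
    counts = follows[char]
    best, n = counts.most_common(1)[0]
    return best, n / sum(counts.values())

print(predict_next("t"))  # ('h', 0.4): 'h' most often follows 't' in this text
```

The model has no notion of cats or mats; it only knows which characters tend to follow which. Scaled up by many orders of magnitude, that is the statistical core of the systems now being debated.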

However, since the trained technology is a kind of non-transparent “black box”, not even its designers understand why the model spits out exactly this text or that image. It therefore remains controversial whether the AIs can really plan and think on their own at some primitive level, or whether they merely infer from the data what human logical thinking sounds like. In that case they would not be super machines but just a kind of impostor that mimics people on the basis of the data and tells them what they want to hear without really understanding it. The faction of skeptics therefore likes to compare the language models to chattering parrots. Fortunately, it is currently not foreseeable that these parrots will suddenly decide to build chemical weapons and wipe out humanity with them.

