Technology: Researcher describes warnings about AI as a “PR campaign”


Sandra Wachter, a researcher at the University of Oxford, works on artificial intelligence.

© Sandra Wachter/PA Media/dpa

When it comes to artificial intelligence, many people worry about human extinction. According to one expert, that fear is unfounded; other risks are far more immediate.

An expert has described warnings that artificial intelligence (AI) could wipe out humanity as a “PR campaign”. That is according to Sandra Wachter of the University of Oxford, as reported by the British news agency PA. The Austrian-born scientist researches the legal and ethical implications of artificial intelligence, mass data (big data) and robotics in Oxford.

According to the current state of knowledge, the risk of humanity being wiped out by AI is “close to zero,” Wachter said, according to the PA report published last night. It is a “science fiction fantasy” like in the Terminator films, one that distracts from the actual dangers and could only become reality in hundreds of years, if at all. “There’s nothing useful you can do about it because it’s so far in the future,” Wachter said.

Real risks

While there are serious risks from AI, the researcher explained, they are not the ones currently receiving all the attention. The real problems lie more in the areas of bias, discrimination and the consequences for the environment.

“I can measure bias and discrimination, I can measure the effects on the environment,” said Wachter. For example, it takes around 1.4 million liters of water a day to cool a medium-sized data center.

A number of leading AI experts had previously called for the risks AI poses to the continued existence of humanity to be taken seriously. Among those who signed the short statement was Sam Altman, head of ChatGPT maker OpenAI.

The chatbot ChatGPT, which can formulate sentences at a human-like level, has triggered a new wave of hype around artificial intelligence in recent months. The nonprofit organization on whose website the statement appeared cites the use of AI in warfare as one possible danger.

dpa
