Elon Musk and hundreds of experts call for a pause in the face of ‘risks to humanity’

They are sounding the alarm. While the machine can generate photorealistic images, pass the American bar exam or medical exams, Elon Musk and hundreds of world experts on Wednesday signed a call for a six-month pause in research on artificial intelligence systems more powerful than GPT-4, the OpenAI model launched in mid-March, citing “major risks for humanity”.

In this petition, published at futureoflife.org, they call for a moratorium until safety systems are in place, including new dedicated regulatory authorities, oversight of AI systems, techniques to help distinguish the real from the artificial, and institutions capable of managing the “dramatic economic and political disruptions (especially to democracy) that AI will cause”.

“Manipulative potential”

The petition brings together figures who have already publicly expressed their fears of out-of-control AI surpassing humans, including Elon Musk, owner of Twitter and founder of SpaceX and Tesla, and Yuval Noah Harari, the author of “Sapiens”.

Also a signatory, Yoshua Bengio, Canadian pioneer of AI, expressed his concerns during a virtual press conference in Montreal: “I don’t think society is ready to face this power, the potential manipulation, for example, of populations which could endanger democracies”.

“We must therefore take the time to slow down this commercial race which is underway”, he added, calling for these issues to be discussed at the global level, “as we have done for energy and nuclear weapons”.

Researchers are divided on recent progress. Some believe that GPT-4 shows glimmers of what could lead to a “strong” artificial intelligence, capable of learning the way humans do. Others, like the French pioneer Yann LeCun, are more reserved.

“Society needs time to adapt”

Sam Altman, boss of OpenAI and creator of ChatGPT, has himself admitted to being “a little bit scared” of his creation were it to be used for “large-scale disinformation or cyberattacks”. “Society needs time to adapt,” he told ABC News in mid-March. But he did not sign the call for a moratorium.

“Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can reliably understand, predict or control,” the petition states.

“Should we let machines flood our information channels with propaganda and lies? Should we automate away all jobs, including the fulfilling ones? Should we develop non-human minds that might one day outnumber us, outsmart us, make us obsolete and replace us? Should we risk losing control of our civilization? Such decisions must not be delegated to unelected tech leaders.”

The signatories also include Apple co-founder Steve Wozniak, members of Google’s DeepMind AI lab, Emad Mostaque, boss of OpenAI competitor Stability AI, as well as American AI experts and academics, and engineers at Microsoft, an ally of OpenAI.
