OpenAI disbands the team responsible for the safety of a potential artificial superintelligence

After the departure of its two leaders, the OpenAI team responsible for the safety of a potential artificial superintelligence has been disbanded and its members integrated into other research groups within the company. OpenAI confirmed the news on Friday, May 17, to Agence France-Presse (AFP), specifying that the restructuring had begun several weeks earlier. The maker of ChatGPT says that researchers working on the safety of AI models will thus be able to work more closely with the engineers who develop those models.

But the two leaders of the former team have just left the star Silicon Valley company. Jan Leike explained on Friday on X that he was resigning over fundamental disagreements with management about the company's priorities: innovation or safety. "We have reached a breaking point," said the engineer, who headed the team responsible for "superalignment", that is, ensuring that a future "general" AI, as intelligent as humans, remains aligned with our values.

Sam Altman, the co-founder and boss of the San Francisco (California)-based company, said he was "very sad to see [Jan Leike] leave". "He's right, we still have a lot to do [on alignment and safety research], and we are determined to do it," he added, promising a longer message soon.


The "superalignment" team was also led by Ilya Sutskever, co-founder of OpenAI, who announced his departure on Tuesday. On X, he said he was "convinced that OpenAI will build" a "general" artificial intelligence "that is both safe and beneficial". The company's former chief scientist, he sat on the board of directors that voted to dismiss Sam Altman in November 2023, before reversing course.

“Act with gravity”

With ChatGPT, OpenAI launched the generative AI revolution (content produced from a simple request in everyday language), which excites Silicon Valley but also worries many observers and regulators, from California to Washington and Brussels. All the more so when Sam Altman talks about creating a "general" AI, that is, one with cognitive capacities equivalent to those of humans.

On May 13, OpenAI presented a new version of ChatGPT that can now hold fluid spoken conversations with its users, a further step toward ever more personal and capable AI assistants.


On Friday, Jan Leike called on all OpenAI employees to "act with gravity" appropriate to what they are building. "I think we should spend a lot more time preparing for the next generations of models, on security, monitoring, safety, cybersecurity, alignment with our values, data privacy, societal impact and related topics," he listed.

Le Monde with AFP

