OpenAI co-founder wants to develop “super intelligence” | tagesschau.de

Status: 20.06.2024 10:45 a.m.

The former head of research at OpenAI is founding a new company and has announced that he wants to develop a “safe superintelligence”. There has previously been repeated criticism over the possible risks of artificial intelligence.

One of the co-founders of ChatGPT inventor OpenAI is starting his own AI start-up, Ilya Sutskever announced yesterday. The goal of the new company, called Safe Superintelligence, is to create a safe, highly advanced artificial intelligence.

“What’s special about the company is that its first product will be safe superintelligence – and it won’t release anything else before that,” Sutskever told the financial news service Bloomberg. This will help avoid commercial pressure and a race with other AI labs, the researcher argued.

Leading minds in AI development

Sutskever is considered one of the leading minds in the development of artificial intelligence and was previously head of research at OpenAI. He has two co-founders at Safe Superintelligence: Daniel Gross, who once worked on artificial intelligence at Apple, and Daniel Levy, who previously worked with Sutskever at OpenAI and trained AI models there. Sutskever will serve as chief developer, but sees himself primarily as responsible for “revolutionary breakthroughs”.

Last year, Sutskever was involved in the surprise dismissal of OpenAI CEO Sam Altman, which was reversed just days later after pressure from employees and major investor Microsoft. Sutskever then remained in the background and left the ChatGPT developer in May.

How does artificial intelligence become safe?

The question of whether AI systems could become dangerous for humanity once they become more powerful and independent has been a concern for the industry for years. There are repeated warnings from experts and attempts by governments to minimize risks through strict regulations and reporting requirements.

How the new company specifically plans to develop safe AI remains vague. According to the announcement, safety will be achieved through technical breakthroughs that are built into the AI system rather than added afterwards.

Much remains unclear

The vision for the new company is something like a return to OpenAI’s roots. The initial idea was to create a non-profit research laboratory for the development of artificial intelligence.

However, a few years after its founding, OpenAI came to the conclusion that it could not survive without a commercial product. This led to a multi-billion dollar pact with software giant Microsoft and the release of ChatGPT.

How Sutskever’s new superintelligence laboratory will be financed has so far remained unclear. Investors will want to know how their money could pay off – and whether a product will ultimately be profitable remains to be seen.

Superintelligence and General Artificial Intelligence

When it comes to developing artificial intelligence, competitors such as OpenAI and Google currently have a head start. Superintelligence generally refers to an AI system that would be equal or even superior to humans. The term Artificial General Intelligence (AGI) is often used as a synonym, although scientists and companies differ on how exactly AGI is defined.

OpenAI CEO Altman wrote in a blog post last year that the company’s mission is to develop an AGI that benefits all of humanity. General artificial intelligence has the potential to give “everyone incredible new abilities”, but it also carries risks of misuse, drastic accidents and social disruption. Recently, there has been repeated criticism of OpenAI’s handling of the potential risks of artificial intelligence.
