One year after ChatGPT, a frantic race and two clashing visions

From our correspondent in California,

“Can machines think?” The question, posed in 1950 by Alan Turing, the father of computing, still has no definitive answer. But while the general public discovered the progress of artificial intelligence with the sensational launch of ChatGPT just a year ago, the start-up behind it, OpenAI, almost fell apart in mid-November. Abruptly fired by the board of directors, its boss, Sam Altman, made his comeback after a revolt by employees. American media have described a split between two factions.

On one side, the techno-optimist “boomers”, who want to develop AGI (Artificial General Intelligence), a “general” AI on a par with human intelligence, as quickly as possible. On the other, the “doomers”, frightened by the apocalyptic risks of a super-intelligence whose interests might not be aligned with ours. Faced with this quasi-religious ideological struggle, public authorities seem to be struggling to decide how to regulate a galloping technology whose consequences for our society could be as profound as those of the industrial revolution. Or the Manhattan Project.

A step towards general AI?

Two weeks after the battle that shook Silicon Valley, few details have emerged. According to Reuters, before attempting to dismiss Altman by accusing him of “not always having been frank in his communications”, the board received a letter from OpenAI researchers warning it of a major breakthrough that could, the agency reports, threaten humanity. The project, called Q* (“Q-star”), would reportedly be able to solve certain mathematical problems beyond the reach of GPT-4, the latest language model OpenAI launched in the spring. Without going so far as to claim that the Holy Grail of AGI has been achieved, the letter could suggest that a milestone has been reached by Ilya Sutskever’s teams.

While Sam Altman has become the face of AI, Sutskever, co-founder of OpenAI and its chief scientist, is one of the most brilliant minds in the discipline. His work at Google contributed, in 2017, to the Transformer deep learning architecture (the “T” in ChatGPT), on which all current generative AI models are based. Instead of grinding through text sequentially, one piece at a time, the Transformer analyzes many words in parallel, for far greater power.
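To make the idea concrete, here is a minimal, illustrative sketch in Python of self-attention, the operation at the heart of the Transformer. It is our own simplification, not OpenAI’s code: real models add learned projection matrices, many attention heads and dozens of layers, but the key point survives, namely that every word is scored against every other word at once rather than read one token at a time.

```python
# Minimal sketch (illustrative, not production code) of self-attention,
# the Transformer operation that lets all words be processed in parallel.
import numpy as np

def self_attention(X):
    """X: (n_tokens, d) matrix of word vectors; returns updated vectors."""
    d = X.shape[1]
    # For simplicity, the word vectors themselves serve as queries, keys
    # and values; real models apply learned projections first.
    scores = X @ X.T / np.sqrt(d)                  # similarity of every pair of tokens
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)  # softmax over each row
    return weights @ X                             # each token blends in context from the others

# Toy example: 3 "words" represented by 4-dimensional vectors.
X = np.random.randn(3, 4)
print(self_attention(X).shape)                     # (3, 4)
```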

In a TED Talk recorded a few weeks before the attempted coup at OpenAI, Sutskever praised the potential of AGI – but also its risks – and said he was working on solutions “so that an AI never rebels”. After the announcement of Sam Altman’s return, Sutskever apologized on X, writing: “I deeply regret my participation in the board’s actions. It was never my intention to harm OpenAI.”

These internal divisions are not new. Launched in 2015 as a non-profit research lab meant to ensure the development of AI “beneficial to humanity”, the start-up and Sam Altman gradually gave in to the siren call of capitalism. After the departure of Elon Musk – one of the co-founders – OpenAI created a subsidiary in 2019 that allowed it to raise more funds and market its products. But the board remained the guarantor of a “capped” profit goal and of technological caution. Backed by 90% of employees, who had threatened to leave for Microsoft, Sam Altman won the battle and returned as a hero, with a new board of directors being formed.

Experts divided on risks

This gap is also found among the field’s greatest experts. Upon retiring from Google in the spring, Geoffrey Hinton, one of the fathers of AI and a former mentor of Ilya Sutskever, made the rounds of TV studios to sound the alarm about the risk of humans becoming “the second most intelligent species on the planet”. According to him, even though the large language models (LLMs) on which generative AIs are based technically only guess the most likely next word, they have achieved a genuine understanding of language and are capable of reasoning and of learning from their experience. For Hinton, it is only a matter of time and computing power before these systems become conscious.
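What “guessing the most likely next word” means in practice can be shown with a deliberately simplified sketch in Python. The probabilities below are invented for illustration, not the output of any real model: a language model assigns a probability to every word in its vocabulary, and generation repeatedly picks one and appends it to the text.

```python
# Minimal sketch (invented numbers): next-word prediction as choosing the
# most probable continuation of a prompt such as "The capital of France is ...".
probs = {
    "Paris": 0.62,
    "London": 0.21,
    "Lyon": 0.09,
    "pizza": 0.08,
}

next_word = max(probs, key=probs.get)  # greedy decoding: take the top-scoring word
print(next_word)                       # -> "Paris"
```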

Frenchman Yann LeCun, a pioneer of neural networks and head of AI at Meta, is not convinced. On the stage of the World Science Festival this week, he argued that AI would forever remain “a tool”, insisting that “the desire for domination” is not, in his view, “linked to intelligence”.

Why such divergence between two scientists who shared the Turing Award (the Nobel Prize of computer science) in 2018? “These models are massive and we don’t fully understand how they work or how they might evolve,” Wael Abd-Almageed, research director at the Information Sciences Institute (ISI) at the University of Southern California (USC) in Los Angeles, explains to 20 Minutes.

LLMs are not complete black boxes: researchers control the data fed through the mill (hundreds of billions of texts from the Web, encyclopedias and books) and the learning rules. But after 100 days of training, a system like GPT-4 sometimes surprises its creators with unanticipated “emergent” abilities, such as solving certain math problems or guessing the title of a movie from a series of emojis. There can also be unpleasant surprises, such as the appearance of biases or “hallucinations”, when the AI makes things up while remaining convinced it is right.

Two visions of the future (illustration created with GPT-4 and DALL-E 3). - OpenAI

Even though it performs well on some standardized tests, and is capable of passing medical licensing or bar exams, GPT-4 still makes trivial arithmetic errors. Its grasp of language, sometimes astonishing, often seems only superficial: in our tests, OpenAI’s AI is terrible at Père Fouras-style riddles and struggles with spoonerisms. But Geoffrey Hinton insists: as with students, making mistakes does not mean a lack of intelligence. What matters is the rate of progress, and AI’s has been dazzling.

Regulation still finding its way

Around the world, an AI arms race is under way, with ever more powerful models. Microsoft has invested in OpenAI. Google (PaLM 2) and Meta (Llama 2) have responded to GPT-4. Elon Musk and xAI have launched Grok. OpenAI alumni founded Anthropic, a public-benefit corporation whose Claude 2 chatbot puts safety first. France is not left out, with the rising star Mistral AI, created by three former employees of DeepMind (Google) and Meta, which bets on an open-source model, Mistral 7B, that anyone can download for free. The German start-up Aleph Alpha has just raised $500 million.

Faced with this exponential progress in AI, public authorities are struggling to find their footing. A shrewd tactician, Sam Altman pleaded before the US Congress for more regulation, without committing to much. A few months ago, he threatened to boycott the European market if the text intended to regulate artificial intelligence (the AI Act) were passed. After a change of heart from Paris, Rome and Berlin – which fear penalizing European AI efforts – the EU seems to be moving towards a much less restrictive law centered on a voluntary code of conduct for companies designing generative AI models.

“Big Tech keeps launching these models to make money without truly studying the potentially disastrous consequences for disinformation, democracy, financial markets or the safety of children,” Wael Abd-Almageed denounces. He nevertheless offers a glimmer of hope: with his colleagues at the ISI, he has just published research showing that creating synthetic videos that are “completely undetectable is theoretically impossible.” “We will always be able to detect deepfakes,” he assures. Humans have not said their last word.
