ChatGPT 4: What’s new in AI and why the inventor warns of disappointment


Expectations for the new GPT version were high (symbolic image)

© gorodenkoff / Getty Images

The new version of ChatGPT is even smarter than its predecessor. But expectations should not be too high.

When OpenAI first presented ChatGPT at the end of last year, it was a breakthrough for AI, at least in the public eye. For the first time, a chat program could not only hold credible conversations but also write entire texts, generate program code on its own and even compose poems. The newly presented successor can do considerably more. And yet OpenAI boss Sam Altman warns: don't expect too much from it.

The new features sound genuinely impressive. According to OpenAI, the program can now handle even more complex tasks. While the previous version only accepted text input, the new one can also recognize images. And in contrast to its predecessor, GPT-4 is said to "hallucinate" less frequently, i.e. to present things as facts that it simply made up. The previous version, 3.5, did this regularly, and with great confidence.

What GPT-4 can do

Each of these three advances is an important step. The benefit of the increased problem-solving ability is almost self-explanatory: GPT-4 can now be given even more complex tasks and still master them. This shows in the impressive list of language and knowledge tests, actually designed for human participants, that the program now passes. The new version even passes the so-called bar exam, the admission test for lawyers in the USA, and would thus, on paper, be allowed to practice there.

Almost more important, however, is the new ability to process more than just text. ChatGPT can now also work with images, and the results are simply impressive. It can not only recognize what is shown but also put it in context: it picks up on the irony in meme images, for example, or generates the code for a website on the fly from a rough pencil sketch uploaded by the user.



The reduction in fabricated answers is also an essential advance. If you asked the previous version for factual statements, you got credible-sounding, well-formulated and very confidently presented answers, but you could not rely on their truthfulness. Again and again, the program simply invented facts. That made the function virtually useless: you could only believe statements you could have made yourself. With the new version, such failures are said to occur 40 percent less often. Whether GPT-4 is actually more reliable as a result, however, remains to be seen.

With the announcement, GPT-4 is also being released to users. The company offers it as part of its subscriptions, and the model also powers the AI search in Microsoft's Bing, which is available to selected users.

No AI revolution

Anyone hoping for a revolution is likely to be disappointed. With the exception of image recognition, the improvements in the new version are mostly under the hood and hardly noticeable in everyday use. Although the program can do much more than its predecessor, the gains lie in the details rather than amounting to a revolution.

OpenAI itself is keen that the new version not be overestimated. Although the company advertises the program as "more creative and collaborative than ever before", boss Sam Altman emphasized on Twitter that it was "still error-prone and limited". "It still looks a lot more impressive when you try it for the first time than when you use it for a longer period," he said, trying to dampen expectations.

“People are begging to be disappointed”

Even GPT-4 is still "just" a very impressive language model, not a fully fledged general AI. Like its predecessor, it essentially guesses, in a highly complex selection process, which word should come next. There is no real understanding of the processed data behind it, let alone real intelligence.
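The idea of "guessing the next word" can be illustrated with a deliberately simplified toy sketch. The probabilities below are invented for illustration only; a real language model like GPT-4 computes such distributions with a neural network over a vocabulary of tens of thousands of tokens, conditioned on the entire preceding text.

```python
import random

# Invented toy distribution: for one fixed context, which word comes next
# and how likely it is. Real models learn these probabilities from data.
toy_model = {
    "the cat sat on the": {"mat": 0.6, "floor": 0.3, "moon": 0.1},
}

def next_word(context):
    """Sample the next word from the toy distribution for this context."""
    probs = toy_model[context]
    words = list(probs)
    weights = list(probs.values())
    return random.choices(words, weights=weights, k=1)[0]

print(next_word("the cat sat on the"))  # usually "mat", sometimes "floor" or "moon"
```

The sketch shows why fluent output does not imply understanding: the program only follows a probability distribution and has no notion of what a cat or a mat is.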

Such general artificial intelligence – known in the industry as Artificial General Intelligence (AGI) – is still a long way off. Altman also warns against confusing his program with one. “People are kind of begging to be disappointed,” Altman said in an interview earlier this year. “We don’t have an AGI, but somehow that’s expected of us.”

Microsoft offers the most important reason for some skepticism. As part of the presentation, the company revealed that GPT-4 had already been running behind its Bing chatbot for weeks. That bot made headlines because it disturbed users with sometimes strange emotional outbursts. Ultimately, the company pulled the brakes and put its chatbot on a much shorter leash, despite the new technology.

Sources: OpenAI presentation, Sam Altman, StrictlyVC

