“We need bigger GPUs”

Nvidia’s CEO yesterday presented his company’s new GPU dedicated to artificial intelligence (AI), the B200. Based on a new architecture called Blackwell, this chip increases performance while reducing energy consumption.


Last night, the man who claims that AI will surpass humans within 5 years presented a new advance in the field. In front of an audience of 18,000 people, Nvidia CEO Jensen Huang unveiled the Blackwell B200 chip and the GB200 “superchip”, both designed for artificial-intelligence workloads. To introduce his new creations, the co-founder of Nvidia declared: “We need bigger GPUs”. The tone was set.

What does Nvidia’s Blackwell B200 chip offer?

Currently, the most common GPU for AI calculations is the Nvidia H100, a chip which notably equips Tesla’s supercomputer. It sells for more than $40,000 per unit to companies in the sector such as OpenAI. Older cards, like the A100 launched in 2020, still sell for around $20,000. These chips made Nvidia the third-largest market capitalization in the world, just behind Apple.

Blackwell chips go further than these wildly successful GPUs. The new B200 GPU delivers up to 20 petaflops of FP4 compute thanks to its 208 billion transistors. The GB200 superchip combines two of these GPUs with a Grace processor to offer 30 times better performance for inference, that is, the calculations used to run large language models such as GPT.

On a GPT-3 LLM benchmark with 175 billion parameters, Nvidia claims that the GB200 delivers performance seven times higher than that of an H100. Nvidia also claims four times faster training, i.e. the calculations used to build large language models.


The B200 GPU is more energy efficient than other Nvidia chips

But the progress is not only a matter of performance. Everyone knows that the power consumption of AI is enormous, and these cards are much more efficient. The GB200 “reduces cost and energy consumption by up to 25 times” compared to an H100, says Nvidia.

As an example, training a model with 1.8 trillion parameters would previously have required 8,000 Hopper GPUs and 15 megawatts of power, according to Nvidia. Today, Jensen Huang explains that 2,000 Blackwell GPUs can do the same using just four megawatts.
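The arithmetic behind those figures can be checked directly. This small sketch uses only the numbers quoted in the article and assumes, for comparison, that both clusters complete the training job in the same amount of time:

```python
# Figures quoted by Nvidia for training a 1.8-trillion-parameter model.
hopper_gpus, hopper_mw = 8_000, 15.0        # Hopper cluster: GPUs, megawatts
blackwell_gpus, blackwell_mw = 2_000, 4.0   # Blackwell cluster: GPUs, megawatts

gpu_reduction = hopper_gpus / blackwell_gpus    # 4.0x fewer GPUs
power_reduction = hopper_mw / blackwell_mw      # 3.75x less power

# Per-GPU draw: each Blackwell GPU actually pulls slightly more power;
# the saving comes from needing far fewer of them for the same job.
hopper_w_per_gpu = hopper_mw * 1e6 / hopper_gpus          # 1875 W per GPU
blackwell_w_per_gpu = blackwell_mw * 1e6 / blackwell_gpus  # 2000 W per GPU

print(f"{gpu_reduction:.2f}x fewer GPUs, {power_reduction:.2f}x less power")
```

In other words, the headline gain is roughly a 3.75x cut in cluster power, achieved with a quarter of the GPUs, even though each individual chip draws a little more.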

A major technological advance, but investors were not impressed. The announcement of the Blackwell architecture and the B200 GPU caused Nvidia’s stock to fall by 1.77%, instead of continuing the spectacular rise of the last twelve months. Nvidia will not overtake Apple. For now, at least.

Source: Bloomberg

  • Nvidia CEO Jensen Huang presented his new GPU dedicated to AI calculations last night.
  • Two of these GPUs combine with a Grace processor into a superchip, the GB200, which offers increased performance.
  • The chip also represents a major advance in power consumption.
