Nvidia’s Blackwell AI ‘superchip’ is the most powerful yet



The Nvidia GB200 Grace Blackwell Superchip

Nvidia

Nvidia has unveiled a “superchip” for training artificial intelligence models, the most powerful it has ever produced. The US computing firm, which has recently rocketed in value to become the world’s third-largest company, hasn’t yet revealed the cost of its new chips, but observers expect a high price tag that will make them accessible to only a few organisations.

The chips were announced by Nvidia CEO Jensen Huang at a press conference in San Jose, California, on 18 March. He showed off the company’s new Blackwell B200 graphics processing units (GPUs), each of which has 208 billion transistors – the tiny switches at the heart of modern computing devices – compared with the 80 billion transistors of Nvidia’s current-generation Hopper chips. He also revealed the GB200 Grace Blackwell Superchip, which combines two of the B200 chips.

“Blackwell is just going to be an amazing system for generative AI,” said Huang. “And in the future, data centres are going to be thought of as AI factories.”

GPUs have become coveted hardware for any organisation seeking to train large AI models. During AI chip shortages in 2023, Elon Musk spoke of GPUs being “considerably harder to get than drugs” and some academic researchers without access bemoaned being “GPU poor”.

Nvidia claims its Blackwell chips can deliver a 30-fold performance improvement when running generative AI services based on large language models, compared with Hopper GPUs, while consuming 25 times less energy.

It says that whereas OpenAI’s GPT-4 large language model required approximately 8000 Hopper GPUs and 15 megawatts of power to perform 90 days of training, the same AI training could be done using just 2000 Blackwell GPUs consuming 4 megawatts of power.
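As a rough sanity check, those training figures imply the total energy saving directly. The sketch below is a back-of-envelope calculation using only the numbers quoted above, assuming the same 90-day run for both chips; Nvidia has not published its underlying methodology.

```python
# Back-of-envelope check of Nvidia's quoted GPT-4-scale training figures.
# Numbers from the article; a 90-day run is assumed for both generations.
HOURS_PER_DAY = 24
DAYS = 90

hopper_mw, hopper_gpus = 15, 8000        # Hopper: 15 MW across 8000 GPUs
blackwell_mw, blackwell_gpus = 4, 2000   # Blackwell: 4 MW across 2000 GPUs

# Total energy for each 90-day run, in megawatt-hours
hopper_mwh = hopper_mw * DAYS * HOURS_PER_DAY
blackwell_mwh = blackwell_mw * DAYS * HOURS_PER_DAY

print(f"Hopper run:    {hopper_mwh:,} MWh on {hopper_gpus:,} GPUs")
print(f"Blackwell run: {blackwell_mwh:,} MWh on {blackwell_gpus:,} GPUs")
print(f"Energy ratio:  {hopper_mwh / blackwell_mwh:.2f}x")
print(f"GPU ratio:     {hopper_gpus / blackwell_gpus:.0f}x")
```

Note that these training numbers imply roughly a 3.75-fold reduction in total energy and a 4-fold reduction in GPU count, which is a separate, more modest claim than the 25-fold efficiency figure Nvidia quotes for running generative AI services.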

The company hasn’t yet revealed the cost of the Blackwell GPUs, but the price tag is likely to reach eye-watering levels, given that the Hopper GPUs already cost between $20,000 and $40,000 each. This focus on developing more powerful and expensive chips means they “will only be accessible to a select few organizations and countries”, says Sasha Luccioni at Hugging Face, a company that develops tools for sharing AI code and datasets. “Apart from the environmental impacts of this already very energy-intensive tech, this is truly a Marie Antoinette, ‘let them eat cake’ moment for the AI community,” she says.

The electricity demand from data centre expansions – largely driven by the generative AI boom – is expected to double by 2026, roughly matching Japan's current electricity consumption. That growth could also carry steep carbon emissions costs if the data centres supporting AI training continue to rely on fossil fuel power plants.

Global demand for GPUs has also meant geopolitical complications for Nvidia amid growing tensions and strategic competition between the US and China. The US government has implemented export controls on advanced chip technologies to delay China’s AI development efforts in a move that it describes as vital to US national security – and that has forced Nvidia to create less powerful versions of its chips for Chinese customers.

Topics:

  • artificial intelligence