Nvidia Blackwell chips are next hot commodity for megascalers

Megascale customers line up for Nvidia’s Blackwell AI chips.

Megascale customers are jumping at the first batches of Nvidia’s new Blackwell chips, cementing the company’s dominance in the data center GPU market.

First announced by Nvidia CEO Jensen Huang at the GTC conference in San Jose, California, last month, the Blackwell GPUs contain 208 billion transistors and 192 gigabytes of HBM3e memory with eight terabytes per second of bandwidth, the company says. The transistor count is so high that each Blackwell chip is technically two GPU dies connected via a high-bandwidth interface.

The line comes in three variants: the B100, the B200, and the GB200, a unit combining two B200s (four dies total) plus a Grace CPU.

The B200 has a max theoretical compute of 20 petaflops, which Tom’s Hardware calculated works out to roughly 2.5x the performance of its predecessor, the H100. Each B200 unit also sucks up a massive 1,200 watts of power, up from the H100’s 700 watts.
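Those two numbers imply an efficiency gain smaller than the headline speedup. A back-of-envelope sketch using only the figures above, and assuming the 2.5x speedup and the two wattage ratings are directly comparable:

```python
# Back-of-envelope math using the figures cited above; assumes the
# 2.5x speedup and the two wattage ratings are directly comparable.
h100_power_w = 700
b200_power_w = 1_200
speedup = 2.5

power_ratio = b200_power_w / h100_power_w   # ~1.71x more power per GPU
perf_per_watt = speedup / power_ratio       # ~1.46x better perf per watt

print(f"Power increase: {power_ratio:.2f}x")
print(f"Perf-per-watt gain: {perf_per_watt:.2f}x")
```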

That extra performance has attracted significant attention from hyperscale customers, the target market for Nvidia’s NVL72 rack-scale architecture, which houses 36 GB200 units (72 Blackwell GPUs) per rack.

CIO Dive reported that Nvidia has snagged the continued interest of Amazon Web Services, which has struck a deal to offer the chips through Amazon Elastic Compute Cloud (EC2) to accelerate AI training. Microsoft has a similar deal to use Blackwell GPUs across Azure for “natural language processing, computer vision, speech recognition and more.”

Google has announced it will integrate Blackwell GPUs into an AI platform for enterprise developers building large language models (LLMs), while Oracle said the new GPUs will help support its cloud infrastructure and enable governments and companies to deploy “sovereign AI” and “AI factories.”

Huang told GTC attendees the chips will cost “between $30,000 and $40,000” each, or somewhere around the peak price the H100 commanded in reseller markets. At those prices, the 72 GPUs in a single NVL72 rack would top $2 million on their own, easily putting complete rack-scale systems into seven figures.
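That back-of-the-rack math is easy to check, assuming Huang’s per-chip quote applies to each of the 72 Blackwell GPUs in an NVL72 (Nvidia hasn’t published rack pricing):

```python
# Rough GPU-only cost range for one NVL72 rack, assuming Huang's
# $30,000-$40,000 per-chip quote applies to each of its 72 GPUs.
gpus_per_rack = 72
low_usd, high_usd = 30_000, 40_000

print(f"Low end:  ${gpus_per_rack * low_usd:,}")   # $2,160,000
print(f"High end: ${gpus_per_rack * high_usd:,}")  # $2,880,000
```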

“Accelerated computing has reached the tipping point—general purpose computing has run out of steam,” Huang said.
