Nvidia has introduced new Blackwell-powered systems that it says will enable enterprises to build large-scale AI factories and data centers to drive generative AI advances. Presented at the Computex trade show in Taipei, the new systems include single- and multi-GPU configurations with either x86 or Grace-based processors, and are available with air or liquid cooling depending on the needs of the application.
Additionally, Nvidia shared updates on its Nvidia MGX modular reference design platform, initially launched at last year’s Computex, which now integrates Blackwell products. During a media briefing ahead of the event, the company said that the new Nvidia GB200 NVL2 platform, a more compact version of the GB200 NVL72 unveiled in March, delivers up to 18 times faster data processing and eight times better energy efficiency than x86 CPUs, thanks to its Blackwell architecture.
The GB200 systems pair the Arm-based Grace CPU with the Blackwell GPU architecture, which was introduced at Nvidia’s GTC event in March as the successor to Hopper.
“The GB200 NVL2 platform marks a significant milestone in bringing generative AI to every datacenter,” said Dion Harris, Nvidia’s director of accelerated computing, at the briefing.
The company also announced ten additional partners that will integrate the Blackwell architecture into their offerings: ASRock Rack, Asus, Gigabyte, Ingrasys, Inventec, Pegatron, QCT, Supermicro, Wistron, and Wiwynn. These companies will build systems spanning cloud, on-premises, embedded, and edge AI applications using Nvidia’s GPUs and networking products.
“Nvidia has firmly established itself as a leader in defining the essential architecture and infrastructure for driving AI innovation,” said Thomas Randall, director of AI market research at Info-Tech Research Group, in an email. “The latest announcements further solidify Nvidia’s role, embedding its technologies at the core of computer production, AI factories, and the democratization of generative AI for developers.”
However, the development may raise concerns among data center managers, according to Alvin Nguyen, a senior analyst at Forrester. “I foresee that systems powered by Blackwell will become the AI accelerator of choice, similar to Nvidia’s previous offerings,” Nguyen said in an email. “The more powerful variants will necessitate liquid cooling solutions, prompting significant changes in the data center landscape. Not all data centers are equipped to handle the necessary power and water requirements to support these robust systems extensively. Consequently, this will lead to increased investments in data center upgrades, construction of new facilities, partnerships with colocation and cloud services, and the adoption of alternative, less energy-intensive accelerators from competitors.”
On the networking front, Nvidia announced that Spectrum-X, an accelerated networking platform for Ethernet-based AI clouds that was unveiled at Computex last year, is now generally available. Furthermore, Amit Katz, VP of networking, said that Nvidia is accelerating its update cadence: new products will be released annually, offering increased bandwidth and port counts along with enhanced software feature sets and programmability.
The platform architecture, with its DPUs (data processing units) optimized for north-south traffic and its SuperNIC optimized for east-west, GPU-to-GPU traffic, makes sense, Forrester’s Nguyen noted, but it does add complexity. “This is a complementary solution that will help drive the adoption of larger AI solutions like SuperPods and AI factories,” he said. “There is additional complexity using this, but it will ultimately help Nvidia customers who are implementing large-scale AI infrastructure.”
But, he added, “Nvidia is pushing AI infrastructure with another technology standard that will end up being proprietary, but they need to do this to keep ahead of the competition.”
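For readers unfamiliar with the traffic split Nguyen describes, the minimal Python sketch below illustrates the concept only: north-south traffic crosses the data center boundary and is handled by DPUs, while east-west traffic moves between GPUs inside the cluster and is handled by SuperNICs. All names here (Flow, GPU_NODES, route) are hypothetical illustrations, not Nvidia software or APIs.

```python
# Conceptual sketch of the north-south vs. east-west traffic split.
# Everything below is illustrative; none of it is Nvidia's API.

from dataclasses import dataclass


@dataclass
class Flow:
    src: str  # e.g. "gpu-node-1" or "internet-client"
    dst: str


# Hypothetical inventory of GPU nodes inside the AI cluster.
GPU_NODES = {"gpu-node-1", "gpu-node-2", "gpu-node-3"}


def route(flow: Flow) -> str:
    """Classify a flow the way the platform splits responsibilities:
    SuperNICs carry east-west (GPU-to-GPU) traffic within the cluster;
    DPUs carry north-south traffic entering or leaving it."""
    internal = flow.src in GPU_NODES and flow.dst in GPU_NODES
    return "SuperNIC (east-west)" if internal else "DPU (north-south)"


if __name__ == "__main__":
    print(route(Flow("gpu-node-1", "gpu-node-2")))      # SuperNIC (east-west)
    print(route(Flow("internet-client", "gpu-node-1")))  # DPU (north-south)
```

The added complexity Nguyen mentions comes from managing these two device classes and traffic patterns separately, which the sketch reduces to a single classification decision.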