HPE and Nvidia Strengthen Their AI Collaboration to Drive Innovation

HPE and Nvidia are expanding their partnership, announcing several additions to their joint AI portfolio at the Nvidia GTC conference. Key developments include a new server blade, expanded GPU support, and improvements to HPE’s private AI package, all aimed at enterprise customers running growing AI workloads.

Among the highlights is the Nvidia Vera Rubin NVL72 rack-scale system, engineered to support AI models with over 1 trillion parameters. This system, targeted primarily at service providers and cloud operators, is part of a broader suite of offerings that also includes a new GX240 liquid-cooled compute blade for the HPE Cray Supercomputer GX5000.

Enhancements to HPE’s Private Cloud AI package also stand out. The offering combines Nvidia GPUs, networking, and software with HPE’s own AI compute and storage, backed by the HPE GreenLake cloud. Notably, HPE has increased the capacity of its network expansion racks to support up to 128 GPUs, enabling larger and more demanding AI workloads.

The updated HPE Private Cloud AI delivers a complete preconfigured hardware and software stack, including the latest Nvidia AI Enterprise software and new blueprints like Nvidia AI-Q for AI agents and Nvidia Omniverse for digital twins. Additional new features include:

  • Configurations that maintain isolation for secure deployments
  • Certification of HPE ProLiant Compute DL380a Gen12 servers for Fortanix Confidential AI, enabling secure on-premises AI model deployments
  • Support for Nvidia’s latest open models to ease infrastructure deployment
  • Standardization of RTX Pro 6000 Blackwell server GPUs across HPE AI factory configurations, enhancing offerings for various deployment scenarios

At the high-performance end, HPE launched one of the first systems built on the Nvidia Vera CPU, the NVL72 rack-scale system, featuring cutting-edge technology for expansive AI model capabilities. Each rack can accommodate up to 640 Nvidia Vera CPUs and 56,320 Arm cores, making it a formidable contender in the supercomputing space.

Nvidia CEO Jensen Huang emphasized the significance of the advancement, noting that the systems orchestrating intelligence will increasingly become the driving force behind AI, enabling faster and more scalable AI systems.
