Nvidia has four new compute-node reference designs for demanding tasks including AI training, high-performance computing, digital twin modeling, and cloud graphics.
Nvidia unveiled high-performance computing (HPC) reference designs and new water-cooling technology for its GPUs at the annual Computex tradeshow in Taipei, Taiwan.
The reference designs employ Nvidia’s forthcoming Grace CPU and Grace Hopper Superchips, due next year. Grace is an Arm-based CPU – Nvidia’s first for the server market. Hopper is Nvidia’s next generation of GPU processors.
There are two superchips: the Grace Superchip, which combines two Grace CPU dies connected with the chipmaker’s super high-speed NVLink-C2C interconnect tech; and the Grace Hopper Superchip, which features one Grace CPU connected to one Hopper GPU, also connected directly to the CPU by NVLink-C2C.
These superchips anchor Nvidia’s HGX line for large HPC deployments, where compute density is paramount. The reference designs come in 1U and 2U form factors, with either two HGX Grace Hopper nodes or four HGX Grace nodes in a single chassis. All four designs include BlueField-3 networking processors.
The four designs are:
- Nvidia HGX Grace Hopper systems for AI training, inference and HPC
- Nvidia HGX Grace systems for HPC and supercomputing with a CPU-only design
- Nvidia OVX systems for digital twins (virtual copies of physical objects) and collaboration workloads
- Nvidia CGX systems for cloud graphics and gaming
“These new reference designs will enable our ecosystem to rapidly productize the servers that are optimized for Nvidia-accelerated computing software stacks, and [they] can be qualified as a part of our Nvidia-certified systems lineup,” said Paresh Kharya, Nvidia’s director of datacenter computing, in a conference call with journalists.
Nvidia already has six partner vendors lined up to release systems in the first half of next year: Asus, Foxconn, Gigabyte, QCT, Supermicro, and Wiwynn.