GPU-CPU Platform for AI, Data Analytics, and HPC

NVIDIA GH200 Grace Hopper Superchip


ARM Efficiency

NVIDIA Grace Hopper Superchip packs 72 ARM cores to deliver leading per-thread performance and higher energy efficiency than traditional x86 CPUs.


Fewer Bottlenecks

The NVIDIA NVLink-C2C interconnect is the heart of the superchip, delivering 900GB/s of bidirectional bandwidth between Grace and Hopper and increasing performance by minimizing data transfer latency between CPU and GPU.


Scalability

The NVIDIA NVLink Switch System scales DGX GH200 by connecting 256 NVIDIA Grace Hopper Superchips into a seamless, high-bandwidth system with a 1:1 CPU-to-GPU ratio.

Fuel Discovery with NVIDIA Grace Hopper Superchip Platforms

Highlights
CPU: 1x NVIDIA Grace Hopper GH200
Memory: Shared 480GB LPDDR5X & 96GB HBM3
GPU: 3x PCIe 5.0 x16 double-wide cards
Storage: 8x E1.S NVMe SSD hot-swap
Highlights
CPU: 1x NVIDIA Grace Hopper GH200
Memory: Shared 480GB LPDDR5X & 96GB HBM3
GPU: 3x PCIe 5.0 x16 double-wide cards
Storage: 8x E1.S NVMe SSD hot-swap
Highlights
CPU: 2x NVIDIA Grace Hopper GH200 (1x per node)
Memory: Shared 960GB LPDDR5X & 192GB HBM3 (480GB & 96GB per node)
GPU: 4x PCIe 5.0 x16 double-wide cards (2x per node)
Storage: 8x E1.S NVMe SSD hot-swap (4x per node)

Power Your Infrastructure with NVIDIA Grace CPU Platforms

Highlights
CPU: 2x NVIDIA Grace CPU (1x per node)
Memory: 960GB LPDDR5X (480GB per node)
GPU: 4x PCIe 5.0 x16 double-wide cards (2x per node)
Storage: 8x E1.S NVMe SSD hot-swap (4x per node)
Highlights
CPU: 1x NVIDIA Grace CPU
Memory: 480GB LPDDR5X
GPU: Up to 4x double-wide accelerators - H100, L40S, NICs, and more
Storage: 8x E1.S NVMe SSD hot-swap

Supercharge with a Superchip

Unlocking new discoveries and solving complex problems requires more advanced computing than ever. The NVIDIA Grace Hopper Superchip tightly integrates CPU and GPU to deliver a uniquely balanced, powerful, and efficient computing platform that accelerates AI training, simulation, and inference to tackle every industry’s next-generation challenges. It speeds up AI workloads such as recommender systems, graph neural networks, and inference, as well as HPC workloads such as database management, molecular dynamics, multi-physics, and more.

Connecting Two Groundbreaking Architectures

The NVIDIA GH200 Superchip combines the Grace CPU and Hopper GPU architectures using NVIDIA NVLink-C2C, a memory-coherent, high-bandwidth, low-latency superchip interconnect between the CPU and GPU.
Grace - Data Center ARM CPU

The NVIDIA Grace CPU, the brain of the superchip, is the first NVIDIA data center CPU. Built with 72 ARM Neoverse V2 cores and 480GB of LPDDR5X memory, Grace delivers 53% more bandwidth at one-eighth the power per GB/s compared with traditional DDR5 memory, for optimal energy efficiency and bandwidth.

Hopper - Flagship GPU for AI

The NVIDIA Hopper GPU features the groundbreaking Transformer Engine, capable of mixing FP8 and FP16 precision formats. With mixed precision, Hopper intelligently manages accuracy while delivering dramatic AI performance gains: up to 9x faster training and 30x faster inference than the previous generation.
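
To make the precision formats concrete, here is a minimal CUDA sketch (an illustration only, assuming CUDA 11.8+ for the cuda_fp8.h header and a hypothetical per-tensor scale) that quantizes FP32 values to Hopper's FP8 E4M3 format and dequantizes them back. It demonstrates the number format itself; the Transformer Engine selects precision and rescales tensors per layer automatically.

```cuda
#include <cstdio>
#include <cuda_fp8.h>  // FP8 storage types (CUDA 11.8+), e.g. __nv_fp8_e4m3

// Quantize FP32 values to FP8 (E4M3) with a per-tensor scale, then dequantize
// back to FP32 so the rounding behavior of the 8-bit format can be inspected.
__global__ void fp8_round_trip(const float* in, float* out, float scale, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        __nv_fp8_e4m3 q = __nv_fp8_e4m3(in[i] * scale);  // cast to 8-bit E4M3
        out[i] = static_cast<float>(q) / scale;          // back to FP32
    }
}

int main()
{
    const int n = 8;
    float *in, *out;
    cudaMallocManaged(&in, n * sizeof(float));
    cudaMallocManaged(&out, n * sizeof(float));
    for (int i = 0; i < n; ++i) in[i] = 0.1f * (i + 1);

    fp8_round_trip<<<1, 32>>>(in, out, /*scale=*/16.0f, n);
    cudaDeviceSynchronize();

    for (int i = 0; i < n; ++i)
        printf("%f -> %f\n", in[i], out[i]);

    cudaFree(in);
    cudaFree(out);
    return 0;
}
```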

NVLink-C2C - Coherent Memory Interconnect

The memory-coherent, high-bandwidth, low-latency NVLink-C2C interconnect is the heart of the Grace Hopper Superchip, enabling up to 900GB/s of total bandwidth, 7x faster than traditional PCIe Gen5. Address Translation Service (ATS) lets Grace and Hopper share a single per-process page table, enabling CPU and GPU threads to access all system-allocated memory, minimizing latency and providing a scalable, distributed caching system that improves I/O performance.
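
As a rough illustration of the programming model this coherence enables, the sketch below (assuming a GH200-class system where ATS lets the GPU walk the CPU's page table) passes an ordinary malloc()'d buffer straight to a CUDA kernel, with no cudaMemcpy or managed allocation. On platforms without this hardware coherence, the same buffer would have to be copied or allocated with cudaMallocManaged.

```cuda
#include <cstdio>
#include <cstdlib>

// Increment every element of a buffer from GPU threads. On Grace Hopper,
// NVLink-C2C plus ATS lets the kernel dereference system-allocated memory
// directly, because CPU and GPU share a single per-process page table.
__global__ void increment(int* data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] += 1;
}

int main()
{
    const int n = 1 << 20;
    // Plain system allocation: no cudaMalloc, cudaMallocManaged, or cudaMemcpy.
    int* data = static_cast<int*>(malloc(n * sizeof(int)));
    for (int i = 0; i < n; ++i) data[i] = i;

    increment<<<(n + 255) / 256, 256>>>(data, n);
    cudaDeviceSynchronize();

    printf("data[0] = %d, data[%d] = %d\n", data[0], n - 1, data[n - 1]);
    free(data);
    return 0;
}
```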

NVIDIA GH200 Specifications

Interested?

Talk to our experienced engineers for more information.