NVIDIA GB300 NVL72

Type: Platform
Tags: NVIDIA, GB300 NVL72, Blackwell Ultra, rack-scale AI, NVLink, DGX, AI factory
Related: NVIDIA-DGX-B300, NVIDIA-DGX, NVIDIA-DGX-SuperPOD, NVIDIA-DGX-SuperPOD-B300-Spectrum-4-Ethernet-RA, NVIDIA-DGX-SuperPOD-B300-Quantum-X800-InfiniBand-RA, NVIDIA-Blackwell-Architecture, NVIDIA-GB200-NVL72, NVIDIA-NVL72-AI-Factory, NVIDIA-Mission-Control, NVIDIA-Spectrum-X, NVIDIA-Spectrum-X-Validated-Solution-Stack, NVIDIA-Quantum-X800-InfiniBand, NVLink, NVIDIA-Enterprise-AI-Factory
Sources: https://www.nvidia.com/en-us/data-center/gb300-nvl72/, https://www.nvidia.com/en-us/data-center/dgx-b300/, https://www.nvidia.com/en-us/data-center/hgx/, https://docs.nvidia.com/enterprise-reference-architectures/nvl72-ai-factory-with-gb300-nvl72-dual-plane-networking-architecture.pdf, https://docs.nvidia.com/networking/software/spectrumx-solution-stack/index.html, https://docs.nvidia.com/dgx-superpod/reference-architecture/scalable-infrastructure-b300/latest/index.html, https://docs.nvidia.com/dgx-superpod/reference-architecture/scalable-infrastructure-b300-xdr/latest/index.html
Last Updated: 2026-05-09

Summary

GB300 NVL72 is NVIDIA’s Blackwell Ultra rack-scale AI system, connecting 72 Blackwell Ultra GPUs as a single high-bandwidth NVLink domain. It follows the NVIDIA-GB200-NVL72 pattern while increasing memory capacity and performance for large-model training and inference.

Detail

Purpose

GB300 NVL72 targets AI factories that need rack-scale GPU memory capacity, high GPU-to-GPU bandwidth, and dense inference throughput. It is the Blackwell Ultra step between the original GB200/B200 generation and the later NVIDIA-Vera-Rubin platform.
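The rack-scale figures behind this positioning can be sketched as a quick back-of-envelope calculation. The only number taken from this page is the 72-GPU NVLink domain; the per-GPU HBM capacity and NVLink bandwidth below are assumptions drawn from NVIDIA's public Blackwell Ultra materials and should be verified against the GB300 NVL72 spec page.

```python
# Back-of-envelope aggregates for one GB300 NVL72 rack (72-GPU NVLink domain).
# Per-GPU figures are assumptions, not specs from this document.

GPUS_PER_RACK = 72            # one NVLink domain in GB300 NVL72
HBM_PER_GPU_GB = 288          # assumed HBM3e capacity per Blackwell Ultra GPU
NVLINK_BW_PER_GPU_TBS = 1.8   # assumed fifth-gen NVLink bandwidth per GPU

total_hbm_tb = GPUS_PER_RACK * HBM_PER_GPU_GB / 1000
total_nvlink_tbs = GPUS_PER_RACK * NVLINK_BW_PER_GPU_TBS

print(f"Rack HBM capacity: ~{total_hbm_tb:.1f} TB")          # ~20.7 TB
print(f"Aggregate NVLink bandwidth: ~{total_nvlink_tbs:.1f} TB/s")  # ~129.6 TB/s
```

These totals are why the system is framed as a single rack-scale accelerator rather than 72 discrete GPUs: models can be sharded across the full NVLink domain's pooled HBM.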

Public positioning

NVIDIA context

Treat GB300 NVL72 as the canonical Blackwell Ultra rack-scale system page. Use NVIDIA-Blackwell-Architecture for architecture-level details and NVIDIA-DGX-B300 for the DGX-branded system page.

Connections

Source Excerpts

  • NVIDIA’s GB300 NVL72 and DGX B300 pages position GB300 NVL72 as a Blackwell Ultra rack-scale AI system for large-scale AI training and inference.

Resources