NVIDIA DGX B200

Type: Platform Tags: NVIDIA, DGX B200, Blackwell, DGX, AI factory, NVLink, BlueField-3, ConnectX-7, AI Enterprise, Mission Control Related: NVIDIA-DGX, NVIDIA-DGX-B300, NVIDIA-DGX-BasePOD, NVIDIA-DGX-BasePOD-B200-H200-H100-RA, NVIDIA-DGX-SuperPOD, NVIDIA-DGX-SuperPOD-B200-RA, NVIDIA-DGX-SuperPOD-GB200-RA, NVIDIA-Blackwell-Architecture, NVIDIA-GB200-NVL72, NVIDIA-Mission-Control, NVIDIA-AI-Enterprise, NVIDIA-BlueField-DPU, NVIDIA-ConnectX-InfiniBand, NVIDIA-Quantum-InfiniBand, NVIDIA-Spectrum-X, NVLink, NVIDIA-MIG, NVIDIA-Enterprise-AI-Factory Sources: https://www.nvidia.com/en-us/data-center/dgx-b200/, https://docs.nvidia.com/dgx/dgxb200-user-guide/, https://docs.nvidia.com/dgx-basepod/reference-architecture-infrastructure-foundation-enterprise-ai/latest/index.html, https://docs.nvidia.com/dgx-superpod/reference-architecture-scalable-infrastructure-b200/latest/index.html Last Updated: 2026-05-09

Summary

NVIDIA DGX B200 is NVIDIA’s Blackwell-generation DGX system for enterprise AI factories. It combines eight NVIDIA Blackwell GPUs, fifth-generation NVLink/NVSwitch, NVIDIA networking, DGX software, NVIDIA-Mission-Control, and NVIDIA-AI-Enterprise into a unified develop-to-deploy platform for training, fine-tuning, inference, recommender systems, and chatbots.

Detail

Purpose

DGX B200 is the DGX system layer between the Hopper-generation DGX H100/H200 and Blackwell Ultra systems such as NVIDIA-DGX-B300. This page covers the DGX B200 product and system identity; NVIDIA-Blackwell-Architecture covers the underlying GPU architecture, and NVIDIA-GB200-NVL72 covers the rack-scale Grace Blackwell NVL72 system.

Key specifications

  • Eight NVIDIA Blackwell GPUs with 1,440 GB total HBM3e memory and 64 TB/s HBM3e bandwidth.
  • FP4 Tensor Core performance listed as 144 PFLOPS sparse / 72 PFLOPS dense.
  • FP8 Tensor Core performance listed as 72 PFLOPS sparse.
  • Two NVIDIA NVSwitch devices with 14.4 TB/s aggregate NVLink bandwidth.
  • Two Intel Xeon Platinum 8570 processors, 112 total CPU cores, and 2 TB system memory configurable to 4 TB.
  • Four OSFP ports serving eight single-port NVIDIA-ConnectX-InfiniBand ConnectX-7 VPI adapters for up to 400 Gb/s InfiniBand/Ethernet.
  • Two dual-port QSFP112 NVIDIA-BlueField-DPU BlueField-3 DPUs for storage and management networking.
  • DGX OS / Ubuntu operating system, NVIDIA-AI-Enterprise optimized AI software, and NVIDIA-Mission-Control operations/orchestration with Run:ai technology.
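The system totals above divide evenly across the eight GPUs. A minimal Python sketch of that arithmetic (the per-GPU split is illustrative division of the listed totals, not an official NVIDIA per-GPU breakdown):

```python
# Derive per-GPU figures from the DGX B200 system totals listed above.
# Assumes an even split across the eight GPUs; values are the system
# totals from the spec list, and the per-GPU numbers are illustrative.

NUM_GPUS = 8

system_totals = {
    "hbm3e_capacity_gb": 1440,     # total GPU memory
    "hbm3e_bandwidth_tbs": 64,     # total HBM3e bandwidth
    "fp4_sparse_pflops": 144,      # FP4 Tensor Core, sparse
    "fp4_dense_pflops": 72,        # FP4 Tensor Core, dense
    "nvlink_aggregate_tbs": 14.4,  # GPU-to-GPU via the two NVSwitch devices
}

per_gpu = {name: total / NUM_GPUS for name, total in system_totals.items()}

for name, value in per_gpu.items():
    print(f"{name}: {value:g} per GPU")
```

Dividing the 14.4 TB/s aggregate NVLink bandwidth by eight recovers the 1.8 TB/s per-GPU bandwidth of fifth-generation NVLink.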

NVIDIA context

DGX B200 is the foundation of Blackwell-era DGX deployments and connects directly to NVIDIA-DGX-BasePOD and NVIDIA-DGX-SuperPOD as enterprise AI factory reference patterns. Compare it with NVIDIA-DGX-B300 when weighing Blackwell versus Blackwell Ultra system choices, and with NVIDIA-GB200-NVL72 when a customer needs rack-scale Grace Blackwell NVLink domains.

Reference architecture placement

Connections

Source Excerpts

  • NVIDIA describes DGX B200 as the foundation for an AI factory and a unified AI platform for develop-to-deploy pipelines.
  • The product page lists eight Blackwell GPUs, 1,440 GB total GPU memory, 14.4 TB/s aggregate NVLink bandwidth, ConnectX-7 VPI networking, BlueField-3 DPUs, AI Enterprise, Mission Control, and DGX OS.

Resources