NVIDIA DGX B200 Blackwell 1,440GB AI Supercomputer

Driving the Future of AI Innovation

Artificial intelligence is revolutionizing industries by automating tasks, enhancing customer experiences, uncovering valuable insights, and driving innovation. Once a futuristic concept, AI is now a transformative force reshaping the way businesses operate. However, as AI workloads grow more sophisticated, they demand compute capacity that often exceeds what traditional enterprise infrastructure can provide. To unlock the full potential of AI, organizations require high-performance computing, storage, and networking solutions that are secure, scalable, and efficient.

Introducing the NVIDIA DGX™ B200, the newest breakthrough in the NVIDIA DGX platform. This next-generation AI solution is powered by NVIDIA Blackwell GPUs and advanced high-speed interconnects, redefining the possibilities of generative AI. Equipped with eight Blackwell GPUs, the DGX B200 delivers exceptional performance, featuring an impressive 1.4 terabytes (TB) of GPU memory and 64 terabytes per second (TB/s) of memory bandwidth. It is purpose-built to handle the most demanding enterprise AI workloads with unmatched efficiency.

With the NVIDIA DGX B200, enterprises can empower their data scientists and developers with a universal AI supercomputer. This accelerates innovation, reduces time to insight, and helps businesses fully harness the transformative potential of AI.

Contact us for pricing 

Tech Specs & Customization


NVIDIA DGX B200 Blackwell 1,440GB GPU Memory / 4TB System Memory AI Supercomputer

GPU: 8x NVIDIA Blackwell GPUs

GPU Memory: 1,440GB total

Performance: 72 petaFLOPS (FP8) for training, 144 petaFLOPS (FP4) for inference

NVIDIA® NVSwitch™: 2x

System Power Usage: Approximately 14.3kW max

CPU: 2x Intel® Xeon® Platinum 8570 Processors, 112 cores total, 2.1 GHz (base), 4 GHz (max boost)

System Memory: Up to 4TB

Networking:

4x OSFP ports for NVIDIA ConnectX-7 VPI, up to 400Gb/s InfiniBand/Ethernet

2x dual-port QSFP112 NVIDIA BlueField-3 DPU, up to 400Gb/s InfiniBand/Ethernet

Management Network: 10Gb/s onboard NIC with RJ45, 100Gb/s dual-port Ethernet NIC, host BMC with RJ45

Storage:

OS: 2x 1.9TB NVMe M.2

Internal: 8x 3.84TB NVMe U.2

Software: NVIDIA AI Enterprise, NVIDIA Base Command, DGX OS / Ubuntu

Rack Units (RU): 10 RU

System Dimensions: Height: 17.5in, Width: 19.0in, Length: 35.3in

Operating Temperature: 5–30°C (41–86°F)
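The aggregate figures above can be cross-checked with a little arithmetic: dividing the 1,440GB total GPU memory and the 64TB/s aggregate memory bandwidth (quoted in the introduction) across the eight Blackwell GPUs yields the per-GPU values. A minimal sketch (the per-GPU numbers are derived here, not quoted from the spec sheet):

```python
# Sanity check on the DGX B200 aggregate figures quoted above.
NUM_GPUS = 8                  # "GPU: 8x NVIDIA Blackwell GPUs"
TOTAL_GPU_MEMORY_GB = 1440    # "GPU Memory: 1,440GB total"
TOTAL_BANDWIDTH_TBS = 64      # "64 terabytes per second (TB/s)" from the intro

per_gpu_memory_gb = TOTAL_GPU_MEMORY_GB / NUM_GPUS      # 180 GB per GPU
per_gpu_bandwidth_tbs = TOTAL_BANDWIDTH_TBS / NUM_GPUS  # 8 TB/s per GPU

print(f"Per-GPU memory:    {per_gpu_memory_gb:.0f} GB")
print(f"Per-GPU bandwidth: {per_gpu_bandwidth_tbs:.0f} TB/s")
```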

 

NVIDIA GB200 Grace Blackwell Superchip

Type: Grace Blackwell Superchip

Memory Clock: 8Gbps HBM3E

Memory Bus Width: 2x2x4096-bit

Memory Bandwidth: 2x8TB/sec

VRAM: 384GB (2x2x96GB)

FP4 Dense Tensor: 20 PFLOPS

INT8/FP8 Dense Tensor: 10 POPS (INT8) / 10 PFLOPS (FP8)

FP16 Dense Tensor: 5 PFLOPS

TF32 Dense Tensor: 2.5 PFLOPS

FP64 Dense Tensor: 90 TFLOPS

Interconnects: 2x NVLink 5 (1800GB/sec) + 2x PCIe 6.0 (256GB/sec)

GPU: 2x “Blackwell GPU”

GPU Transistor Count: 416B (2x2x104B)

TDP: 2700W

Manufacturing Process: TSMC 4NP

Interface: Superchip

Architecture: Grace + Blackwell
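The "2x2x…" notation in the list above unpacks as: one superchip carries two Blackwell GPUs, each built from two dies. A short sketch confirming that the totals quoted in the spec list follow from the per-die figures:

```python
# Unpacking the "2x2x..." notation in the GB200 superchip spec list.
SUPERCHIP_GPUS = 2            # "GPU: 2x Blackwell GPU"
DIES_PER_GPU = 2              # two dies per Blackwell GPU
HBM_PER_DIE_GB = 96           # "VRAM: 384GB (2x2x96GB)"
TRANSISTORS_PER_DIE_B = 104   # "416B (2x2x104B)"
BANDWIDTH_PER_GPU_TBS = 8     # "Memory Bandwidth: 2x8TB/sec"

total_vram_gb = SUPERCHIP_GPUS * DIES_PER_GPU * HBM_PER_DIE_GB              # 384
total_transistors_b = SUPERCHIP_GPUS * DIES_PER_GPU * TRANSISTORS_PER_DIE_B  # 416
total_bandwidth_tbs = SUPERCHIP_GPUS * BANDWIDTH_PER_GPU_TBS                 # 16

print(f"Total VRAM:        {total_vram_gb} GB")
print(f"Total transistors: {total_transistors_b}B")
print(f"Total bandwidth:   {total_bandwidth_tbs} TB/s")
```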

Q4 2024 RELEASE. Inquire for more information, lead times, and pricing details.