NVIDIA H200 NVL Graphics Card, 141 GB
Revolutionary Performance for AI and Data Center Workloads
The NVIDIA H200 Tensor Core GPU in its PCIe form factor (H200 NVL) delivers exceptional performance for AI applications, with 141GB of HBM3e memory and 4.8TB/s of memory bandwidth. Designed for large-scale deployments, it supports up to eight GPUs per server and uses 2- or 4-way NVLink bridges for GPU-to-GPU transfers of up to 900GB/s, roughly seven times the bandwidth of PCIe Gen5. With Tensor Cores delivering nearly 4,000 TFLOPS of FP8 compute (with sparsity), the H200 NVL is built for high-demand data center environments, scalable AI applications, and multi-tenant workloads, where Multi-Instance GPU (MIG) partitioning into up to seven isolated instances keeps utilization high.
Contact us for pricing
Tech Specs
| Specification | H200 NVL (PCIe) |
|---|---|
| FP64 | 34 TFLOPS |
| FP64 Tensor Core | 67 TFLOPS |
| FP32 | 67 TFLOPS |
| TF32 Tensor Core² | 989 TFLOPS |
| BFLOAT16 Tensor Core² | 1,979 TFLOPS |
| FP16 Tensor Core² | 1,979 TFLOPS |
| FP8 Tensor Core² | 3,958 TFLOPS |
| INT8 Tensor Core² | 3,958 TOPS |
| GPU Memory | 141GB |
| GPU Memory Bandwidth | 4.8TB/s |
| Decoders | 7 NVDEC, 7 JPEG |
| Confidential Computing | Supported |
| Max Thermal Design Power (TDP) | Up to 600W (configurable) |
| Multi-Instance GPUs | Up to 7 MIGs @ 16.5GB each |
| Form Factor | PCIe |
| Interconnect | 2- or 4-way NVIDIA NVLink bridge: 900GB/s; PCIe Gen5: 128GB/s |
| Server Options | NVIDIA MGX™ H200 NVL partner and NVIDIA-Certified Systems with up to 8 GPUs |
| NVIDIA AI Enterprise | Add-on |

² With sparsity.
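The headline figures in the table combine in straightforward ways. As a rough sketch (all inputs taken from the spec table above; the derived numbers are illustrative arithmetic, not NVIDIA-published figures):

```python
# Back-of-the-envelope figures derived from the H200 NVL spec table.
GPU_MEMORY_GB = 141       # HBM3e per GPU
NVLINK_GBS = 900          # NVLink bridge bandwidth
PCIE_GEN5_GBS = 128       # PCIe Gen5 bandwidth
GPUS_PER_SERVER = 8       # max GPUs per certified server
MIG_SLICE_GB = 16.5       # memory per MIG instance
MIG_INSTANCES = 7         # max MIG instances per GPU

# Aggregate HBM across a fully populated server
total_memory_gb = GPU_MEMORY_GB * GPUS_PER_SERVER        # 1128 GB

# How much faster NVLink-bridged transfers are than PCIe Gen5
nvlink_speedup = NVLINK_GBS / PCIE_GEN5_GBS              # ~7.0x

# Memory exposed when a GPU is fully partitioned into MIG slices
mig_total_gb = MIG_SLICE_GB * MIG_INSTANCES              # 115.5 GB

print(f"Aggregate HBM per 8-GPU server: {total_memory_gb} GB")
print(f"NVLink vs PCIe Gen5 bandwidth:  {nvlink_speedup:.1f}x")
print(f"Memory across 7 MIG slices:     {mig_total_gb} GB")
```

Note that full MIG partitioning addresses 115.5GB of the 141GB; the remainder is reserved outside the uniform slices.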
Why Choose Us?
We deliver tailored, scalable, and cost-effective IT solutions that drive business success in a technology-driven world. Whether optimizing your data center, adopting AI, or securing your enterprise, we are committed to partnering with you on your digital transformation journey.
All brand names and trademarks are referred to here for descriptive purposes only and are the properties of their respective owners.
© Will Imaging. All rights reserved.
