NVIDIA A100 Enterprise 40GB/80GB PCIe

The NVIDIA A100 Tensor Core GPU redefines performance at every scale, driving the world’s most powerful elastic data centers for AI, data analytics, and high-performance computing (HPC) applications.

As the cornerstone of the NVIDIA data center platform, the A100 delivers up to 20x the performance of the previous NVIDIA Volta generation. With Multi-Instance GPU (MIG) technology, it can scale up to serve large workloads or be partitioned into as many as seven fully isolated GPU instances, giving elastic data centers a single platform that adapts dynamically to changing workload demands.
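On an A100 with a MIG-capable driver, partitioning is managed through nvidia-smi. The sketch below shows the documented command sequence for enabling MIG mode and carving the card into seven instances; it assumes root privileges, an otherwise idle GPU at device index 0, and the 40 GB board's 1g.5gb profile (profile ID 19):

```shell
# Enable MIG mode on GPU 0 (a GPU reset or reboot may be required
# before the mode change takes effect).
sudo nvidia-smi -i 0 -mig 1

# List the GPU instance profiles this card supports.
sudo nvidia-smi mig -lgip

# Create seven 1g.5gb GPU instances (profile ID 19 on the 40 GB A100)
# and, via -C, a default compute instance on each.
sudo nvidia-smi mig -cgi 19,19,19,19,19,19,19 -C

# Verify the resulting GPU instances.
sudo nvidia-smi mig -lgi
```

Each instance then appears to CUDA applications as its own device with its own memory and SM slice, addressable by its MIG UUID (for example via CUDA_VISIBLE_DEVICES).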

Contact us for pricing

Tech Specs

CUDA Cores: 6,912
Streaming Multiprocessors: 108
Tensor Cores (3rd Gen): 432
GPU Memory: 40 GB HBM2 or 80 GB HBM2e, ECC on by default
Memory Interface: 5120-bit
Memory Bandwidth: 1,555 GB/s (40 GB) / 1,935 GB/s (80 GB)
NVLink: 2-way, 2-slot bridge, 600 GB/s bidirectional
MIG (Multi-Instance GPU) Support: Yes, up to 7 GPU instances
FP64: 9.7 TFLOPS
FP64 Tensor Core: 19.5 TFLOPS
FP32: 19.5 TFLOPS
TF32 Tensor Core: 156 TFLOPS | 312 TFLOPS*
BFLOAT16 Tensor Core: 312 TFLOPS | 624 TFLOPS*
FP16 Tensor Core: 312 TFLOPS | 624 TFLOPS*
INT8 Tensor Core: 624 TOPS | 1,248 TOPS*
Thermal Solution: Passive
vGPU Support: NVIDIA Virtual Compute Server (vCS)
System Interface: PCIe 4.0 x16

* With structured sparsity
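As a sanity check, the headline throughput figures above can be reproduced from the core counts. The sketch below assumes the A100's 1410 MHz boost clock and 256 FP16 FMAs per tensor core per clock (figures from NVIDIA's public A100 documentation, not this page), and shows that the starred values are simply the dense rates doubled by structured sparsity:

```python
# Sketch: derive the A100's peak-throughput numbers from core counts
# and clocks. Boost clock and per-tensor-core FMA rate are assumed
# values taken from NVIDIA's public A100 documentation.

TENSOR_CORES = 432
BOOST_CLOCK_GHZ = 1.41           # assumed boost clock (1410 MHz)
FP16_FMA_PER_TC_PER_CLOCK = 256  # assumed 3rd-gen tensor core rate

# Each FMA counts as 2 floating-point operations.
fp16_dense_tflops = (TENSOR_CORES * FP16_FMA_PER_TC_PER_CLOCK * 2
                     * BOOST_CLOCK_GHZ / 1000)
print(round(fp16_dense_tflops))   # 312 (TFLOPS, dense)

# Structured sparsity doubles effective tensor-core throughput,
# which is what the starred (*) figures in the table denote.
print(round(fp16_dense_tflops * 2))  # 624 (TFLOPS, sparse)

# Memory bandwidth of the 40 GB (HBM2) board: 5120-bit bus at an
# assumed 2.43 Gb/s per pin.
bandwidth_gbs = 5120 * 2.43 / 8
print(round(bandwidth_gbs, 1))    # 1555.2 (GB/s)
```

The same arithmetic scales down for TF32 (one quarter the FP16 rate) and up for INT8 (twice the FP16 rate), matching the rest of the table.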

Update 06.01.2024: Production of this product ended in February 2024, and it is now designated end-of-life (EOL). Remaining stock is limited. NVIDIA recommends the L40/L40S series as a replacement, along with the more budget-friendly RTX 6000 Ada Generation or the high-performance, premium-priced H100. A direct successor has not yet been announced.