The NVIDIA H200 is the world's most powerful GPU for large language model training, pairing 141GB of HBM3e memory with 4.8 TB/s of memory bandwidth to train models with hundreds of billions of parameters.
Flexible pricing options to match your workload requirements.
Pay as you go with no commitment
Save 15% with monthly commitment
Maximum savings with annual commitment
141GB HBM3e - 76% more memory than the 80GB H100 for larger models and batch sizes.
4.8 TB/s memory bandwidth for faster data movement and reduced bottlenecks.
900 GB/s GPU-to-GPU bandwidth for efficient multi-GPU scaling.
Automatic mixed precision with FP8 support for up to 2x throughput on transformer workloads.
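The capacity, bandwidth, and precision figures above can be sanity-checked with back-of-envelope arithmetic. A minimal sketch (weights-only; it deliberately ignores activations, gradients, and optimizer state, which dominate during actual training):

```python
HBM_BYTES = 141e9        # 141 GB HBM3e (decimal GB, as in the spec)
BW_BYTES_PER_S = 4.8e12  # 4.8 TB/s memory bandwidth

def max_weight_params(bytes_per_param: float, mem_bytes: float = HBM_BYTES) -> float:
    """Parameters whose raw weights alone fit in HBM."""
    return mem_bytes / bytes_per_param

fp16_params = max_weight_params(2.0)  # FP16/BF16: 2 bytes per parameter
fp8_params = max_weight_params(1.0)   # FP8: 1 byte per parameter

# Lower bound on one full sweep of HBM, relevant for
# memory-bandwidth-bound kernels such as decode-time attention.
full_read_s = HBM_BYTES / BW_BYTES_PER_S

print(f"FP16 weights that fit: {fp16_params / 1e9:.1f}B params")
print(f"FP8 weights that fit:  {fp8_params / 1e9:.1f}B params")
print(f"Full-HBM read time:    {full_read_s * 1e3:.1f} ms")
```

This is where the FP8 advantage shows up concretely: halving bytes per parameter doubles both the weights that fit and the parameters moved per unit of bandwidth.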
Train models with hundreds of billions of parameters using 141GB of HBM3e memory.
Build vision-language models that require massive memory for image and text processing.
Scale across multiple H200 GPUs with 900 GB/s NVLink interconnect.
Deploy large models with enough memory for long context windows.
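To see why 141GB matters for long context windows, consider the KV cache, which grows linearly with context length. A rough sketch, assuming a hypothetical 70B-class model with grouped-query attention (80 layers, 8 KV heads, head dimension 128 — illustrative numbers, not a specific model):

```python
def kv_cache_bytes_per_token(layers: int, kv_heads: int,
                             head_dim: int, bytes_per_elem: int) -> int:
    """Two cached tensors (K and V) per layer, one entry per KV head/dim."""
    return 2 * layers * kv_heads * head_dim * bytes_per_elem

HBM = 141e9                          # 141 GB HBM3e
LAYERS, KV_HEADS, HEAD_DIM = 80, 8, 128  # hypothetical 70B-class config
weights = 70e9 * 1                   # ~70B params at FP8: 1 byte each

per_token = kv_cache_bytes_per_token(LAYERS, KV_HEADS, HEAD_DIM, 2)  # FP16 KV
max_ctx = int((HBM - weights) // per_token)

print(f"KV cache per token: {per_token / 1024:.0f} KiB")
print(f"Rough max context:  {max_ctx:,} tokens")
```

Under these assumptions the memory left over after weights supports context lengths in the hundreds of thousands of tokens on a single GPU; real limits depend on the serving stack, batch size, and KV precision.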