NVIDIA A30

A mainstream data center GPU for mixed AI and HPC workloads, pairing 24 GB of HBM2 memory with a 165W TDP for power-efficient, sustainable deployments.

Technical Specifications

  • Architecture: Ampere
  • VRAM: 24 GB HBM2
  • Memory Bandwidth: 933 GB/s
  • CUDA Cores: 3,584
  • Tensor Cores: 224 (3rd Gen)
  • FP16 Performance: 165 TFLOPS
  • TF32 Performance: 82 TFLOPS
  • TDP: 165W
  • MIG Support: Up to 4 instances
  • PCIe: Gen4 x16
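
These figures can be confirmed from inside a provisioned instance. The minimal sketch below queries them through the NVIDIA Management Library bindings (the nvidia-ml-py package); the package being preinstalled and device index 0 are assumptions about the image.

    # Minimal sketch: confirm the A30's reported specs from inside an instance.
    # Assumes the NVIDIA driver and the nvidia-ml-py package are installed
    # (pip install nvidia-ml-py); device index 0 is an assumption.
    import pynvml

    pynvml.nvmlInit()
    try:
        handle = pynvml.nvmlDeviceGetHandleByIndex(0)

        name = pynvml.nvmlDeviceGetName(handle)
        if isinstance(name, bytes):          # older bindings return bytes
            name = name.decode()

        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)                # bytes
        power = pynvml.nvmlDeviceGetPowerManagementLimit(handle)    # milliwatts

        print(f"GPU:          {name}")
        print(f"Total memory: {mem.total / 1024**3:.1f} GiB")       # ~24 GiB
        print(f"Power limit:  {power / 1000:.0f} W")                # ~165 W
    finally:
        pynvml.nvmlShutdown()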

Pricing Plans

Flexible pricing options to match your workload requirements.

On-Demand

Pay as you go with no commitment

₹90/hour
  • 1x NVIDIA A30 GPU
  • 12 vCPUs
  • 96 GB RAM
  • 250 GB NVMe SSD
  • No minimum commitment
  • Start/stop anytime

Reserved 1 Month (Most Popular)

Save 15% with monthly commitment

₹38,250/month
  • 1x NVIDIA A30 GPU
  • 12 vCPUs
  • 96 GB RAM
  • 250 GB NVMe SSD
  • 15% discount
  • Priority support

Reserved 1 Year

Maximum savings with annual commitment

₹27,000/month
  • 1x NVIDIA A30 GPU
  • 12 vCPUs
  • 96 GB RAM
  • 250 GB NVMe SSD
  • 40% discount
  • Dedicated support

Why Choose A30

Power Efficient

A 165W TDP against 165 TFLOPS of FP16 Tensor Core throughput works out to roughly 1 TFLOPS per watt, excellent performance-per-watt for sustainable deployments.

MIG Support

Partition into up to 4 isolated GPU instances for multi-tenant deployments.
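
As a rough illustration of how that partitioning is typically done, the sketch below drives the standard nvidia-smi MIG workflow from Python to split one A30 into four 1g.6gb instances. Root privileges, a MIG-capable driver, and an idle GPU are assumed; verify the profile names with nvidia-smi mig -lgip for your driver version.

    # Rough sketch: enable MIG on GPU 0 and split an A30 into four 1g.6gb slices.
    # Assumes root privileges, a MIG-capable driver, and an otherwise idle GPU.
    # Available profiles can be listed first with: nvidia-smi mig -lgip
    import subprocess

    def run(cmd):
        print("$", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # 1. Put GPU 0 into MIG mode (a GPU reset or reboot may be needed to apply).
    run(["nvidia-smi", "-i", "0", "-mig", "1"])

    # 2. Create four 1g.6gb GPU instances, each with a default compute instance (-C).
    run(["nvidia-smi", "mig", "-i", "0",
         "-cgi", "1g.6gb,1g.6gb,1g.6gb,1g.6gb", "-C"])

    # 3. List the resulting MIG devices and their UUIDs for scheduling.
    run(["nvidia-smi", "-L"])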

HBM2 Memory

24 GB of HBM2 at 933 GB/s keeps both AI and HPC workloads fed with data.

Cost Effective

Mainstream pricing for production inference and development workloads.

Use Cases

Mixed Workloads

Run AI inference and HPC applications on the same infrastructure.

Development & Testing

Cost-effective GPU for ML model development and experimentation.

Edge AI

Deploy AI models at the edge with efficient power consumption.

Multi-Tenant Inference

Partition with MIG for multiple concurrent inference workloads.
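
One common pattern, sketched below under stated assumptions, is to pin each inference worker to a single MIG slice by exporting that slice's UUID (from nvidia-smi -L) in CUDA_VISIBLE_DEVICES before the framework initializes. The UUID and the stand-in PyTorch model are placeholders.

    # Hypothetical sketch: pin one inference worker to a single MIG slice by
    # exporting its UUID (from nvidia-smi -L) before the framework initializes.
    # The UUID and the tiny stand-in model below are placeholders.
    import os

    os.environ["CUDA_VISIBLE_DEVICES"] = "MIG-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"

    import torch

    device = torch.device("cuda:0")                        # the one visible slice
    model = torch.nn.Linear(1024, 1024).half().to(device)  # placeholder model

    with torch.inference_mode():
        batch = torch.randn(32, 1024, dtype=torch.float16, device=device)
        out = model(batch)
        print(out.shape)                                   # torch.Size([32, 1024])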

Ready to Deploy A30?

Power-efficient GPU for mainstream AI and HPC workloads.