December 2024 release of DeepSeek V3, a Mixture-of-Experts (MoE) model with 671B total parameters, of which roughly 37B are activated per token. Enhanced reasoning, coding, and multilingual capabilities with a 64K-token context length.
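The practical upshot of the MoE design is that only a small fraction of the 671B parameters runs for any given token. The sketch below is a minimal illustration of top-k expert routing in PyTorch with toy sizes; it is not DeepSeek's implementation (DeepSeekMoE additionally uses shared experts and fine-grained expert segmentation).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMoELayer(nn.Module):
    """Minimal top-k MoE layer: a router picks k experts per token,
    so only a fraction of the layer's parameters is active per token."""

    def __init__(self, d_model=64, d_ff=256, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, n_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):  # x: (tokens, d_model)
        scores = self.router(x)                        # (tokens, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1) # keep only k experts per token
        weights = F.softmax(weights, dim=-1)           # normalize over the chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.top_k):                 # dispatch each token to its k experts
            for e in range(len(self.experts)):
                mask = idx[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * self.experts[e](x[mask])
        return out

layer = ToyMoELayer()
tokens = torch.randn(10, 64)
print(layer(tokens).shape)  # torch.Size([10, 64])
```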
Deployment options:
- Pay per token with auto-scaling (see the API sketch below).
- Reserved GPU capacity for consistent performance.
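For the pay-per-token path, a minimal call might look like the sketch below, assuming the provider exposes an OpenAI-compatible chat completions endpoint (as DeepSeek's own API does). The base URL, model identifier, and environment variable names are placeholders, not this provider's documented values.

```python
import os
from openai import OpenAI

# Placeholders: substitute your provider's endpoint, key, and model id.
client = OpenAI(
    base_url=os.environ["PROVIDER_BASE_URL"],
    api_key=os.environ["PROVIDER_API_KEY"],
)

response = client.chat.completions.create(
    model="deepseek-v3",  # placeholder model identifier
    messages=[
        {"role": "system", "content": "You are a helpful coding assistant."},
        {"role": "user", "content": "Write a Python function that reverses a linked list."},
    ],
    max_tokens=512,
)
print(response.choices[0].message.content)
```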
Hardware configurations (see the sizing sketch below):
- 640 GB total GPU memory: recommended for serving the full model.
- 564 GB total GPU memory: premium performance option.
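A back-of-envelope, weights-only estimate helps put totals like these in context: the footprint is roughly parameter count times bytes per parameter, before KV cache and activation overhead. The sketch below is a rough estimator under that assumption, not the provider's sizing formula.

```python
def weights_footprint_gb(n_params: float, bits_per_param: float) -> float:
    """Rough weights-only memory estimate; real serving also needs room
    for the KV cache, activations, and framework overhead."""
    return n_params * bits_per_param / 8 / 1e9

N = 671e9  # DeepSeek V3 total parameter count
for bits in (16, 8, 4):
    print(f"{bits}-bit weights: ~{weights_footprint_gb(N, bits):,.0f} GB")
# 16-bit: ~1,342 GB; 8-bit: ~671 GB; 4-bit: ~336 GB
# Whether a given total (e.g. 640 GB) suffices depends on the quantization
# and parallelism scheme the deployment actually uses.
```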
Capabilities:
- Code generation: generate high-quality code across multiple programming languages.
- Mathematical reasoning: solve complex mathematical problems with step-by-step solutions.
- Multilingual: excellent performance in English and Chinese language tasks.
- Long context: process and analyze documents up to 64K tokens (see the tokenizer sketch below).
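One practical concern with the 64K window is checking that an input document actually fits before sending it. The sketch below uses the Hugging Face tokenizer for illustration; the 64,000-token figure is the nominal limit from this card, the response budget is an assumption, and a hosted endpoint's tokenizer may count slightly differently.

```python
from transformers import AutoTokenizer

# Illustrative only: the budget split is an assumption of this sketch.
tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-V3", trust_remote_code=True)

CONTEXT_WINDOW = 64_000   # nominal 64K-token context length from this card
RESPONSE_BUDGET = 4_000   # leave room for the model's answer

def fits_in_context(document: str) -> bool:
    """Check whether a document leaves room for a response in the window."""
    return len(tokenizer.encode(document)) <= CONTEXT_WINDOW - RESPONSE_BUDGET

def truncate_to_budget(document: str) -> str:
    """Crude head-truncation to the input budget; chunking or retrieval is
    usually a better strategy for very long documents."""
    ids = tokenizer.encode(document)[: CONTEXT_WINDOW - RESPONSE_BUDGET]
    return tokenizer.decode(ids)
```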