17B x 128 Experts Instruct
Meta's flagship mixture-of-experts (MoE) model with 128 specialized experts and a 128K-token context window. Strong instruction following with efficient inference that activates only 17B parameters per token.
Mixture-of-experts architecture with 128 specialized experts.
128K-token context length for complex, long-input tasks.
Fine-tuned for following complex instructions accurately.
Only 17B parameters active per token despite 128 experts; a routing sketch follows this list.
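To make the "active parameters" idea concrete, here is a minimal sketch of top-k expert routing in PyTorch. The dimensions, top_k value, and expert layout are illustrative assumptions, not the model's actual configuration; the point is only that each token is routed to a small subset of experts, so most expert weights stay idle for any given token.

```python
# Minimal sketch of top-k expert routing; sizes are illustrative,
# not the model's real configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    def __init__(self, d_model=512, n_experts=128, top_k=1):
        super().__init__()
        # One linear router scores every expert for each token.
        self.router = nn.Linear(d_model, n_experts, bias=False)
        # Each expert is a small feed-forward block.
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(d_model, 4 * d_model),
                nn.SiLU(),
                nn.Linear(4 * d_model, d_model),
            )
            for _ in range(n_experts)
        )
        self.top_k = top_k

    def forward(self, x):  # x: (num_tokens, d_model)
        scores = self.router(x)                         # (tokens, experts)
        weights, idx = scores.topk(self.top_k, dim=-1)  # pick top_k experts
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        # Only the selected experts run for each token; every other
        # expert's parameters are untouched, which is why active
        # parameters per token stay far below the total count.
        for k in range(self.top_k):
            for e in idx[:, k].unique():
                mask = idx[:, k] == e
                out[mask] += weights[mask, k : k + 1] * self.experts[e](x[mask])
        return out

# Example: route 10 tokens through the layer.
moe = TopKMoE()
tokens = torch.randn(10, 512)
print(moe(tokens).shape)  # torch.Size([10, 512])
```

With 128 experts but only a few selected per token, per-token compute tracks the 17B active parameters rather than the model's full parameter count.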
Pay per token with auto-scaling; a sample request sketch follows these options.
Reserved GPU for consistent performance
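As an illustration of the pay-per-token option, here is a sketch of a serverless request, assuming the provider exposes an OpenAI-compatible endpoint; the base URL and model identifier below are placeholders, not confirmed values.

```python
# Sketch of a serverless, pay-per-token request, assuming an
# OpenAI-compatible endpoint; base_url and model are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example.com/v1",  # placeholder provider URL
    api_key="YOUR_API_KEY",
)

resp = client.chat.completions.create(
    model="llama-4-17b-128e-instruct",  # placeholder model identifier
    messages=[
        {"role": "system", "content": "Answer precisely and concisely."},
        {"role": "user", "content": "List three uses of a 128K context window."},
    ],
    max_tokens=256,
)
print(resp.choices[0].message.content)
```

Under pay-per-token billing, cost scales with prompt and completion tokens, while the reserved-GPU option trades that elasticity for consistent latency.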
Process documents up to 128K tokens in a single context; a single-pass example follows this list.
Analyze entire codebases with extended context.
Summarize and analyze lengthy research papers.
Build chatbots with excellent instruction following.
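For the long-document use cases above, here is a minimal sketch of single-pass summarization within the 128K-token window, reusing the placeholder endpoint from the previous example; the 4-characters-per-token estimate is a rough heuristic, not an exact tokenizer.

```python
# Sketch of single-pass summarization of a long document; the endpoint,
# model name, and 4-chars-per-token estimate are assumptions.
from openai import OpenAI

client = OpenAI(base_url="https://api.example.com/v1", api_key="YOUR_API_KEY")

with open("paper.txt", encoding="utf-8") as f:
    document = f.read()

# Rough length check: leave headroom for the prompt and the reply.
approx_tokens = len(document) // 4
assert approx_tokens < 120_000, "document may exceed the 128K window"

resp = client.chat.completions.create(
    model="llama-4-17b-128e-instruct",  # placeholder model identifier
    messages=[
        {"role": "system", "content": "Summarize the user's document."},
        {"role": "user", "content": document},
    ],
)
print(resp.choices[0].message.content)
```

Because the whole document fits in one request, no chunking or retrieval pipeline is needed; the same pattern applies to codebase analysis by concatenating source files into the prompt.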