DeepSeek-R1 is a 671B parameter Mixture-of-Experts (MoE) model with 37B activated parameters per token, trained via large-scale reinforcement learning with a focus on reasoning capabilities. It incorporates two RL stages for discovering improved reasoning patterns and aligning with human preferences, along with two SFT stages for seeding reasoning and non-reasoning capabilities. The model achieves performance comparable to OpenAI-o1 across math, code, and reasoning tasks.
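To make the "activated parameters" idea concrete, below is a minimal sketch of top-k expert routing in a generic MoE layer. It is an illustration only, not DeepSeek-R1's actual routing code; all names (`moe_forward`, `gate_w`, `experts`) and the toy sizes are hypothetical.

```python
import numpy as np

def moe_forward(x, gate_w, experts, k=2):
    """Route one token through the top-k experts of a toy MoE layer.

    Only the selected experts run, which is how a model with a huge
    total parameter count can activate only a small fraction of its
    parameters per token (e.g. 37B of 671B for DeepSeek-R1).

    x:       (d_model,) token hidden state
    gate_w:  (d_model, n_experts) router weights (hypothetical)
    experts: list of callables, each mapping (d_model,) -> (d_model,)
    """
    scores = x @ gate_w                 # router logits, one per expert
    top = np.argsort(scores)[-k:]       # indices of the k highest-scoring experts
    weights = np.exp(scores[top])
    weights /= weights.sum()            # softmax over the selected experts only
    # Weighted sum of the chosen experts' outputs; all other experts stay idle.
    return sum(w * experts[i](x) for w, i in zip(weights, top))

# Toy usage: 4 experts, each token routed through the top 2.
rng = np.random.default_rng(0)
d, n_experts = 16, 4
gate_w = rng.normal(size=(d, n_experts))
experts = [(lambda W: (lambda x: x @ W))(rng.normal(size=(d, d)) * 0.1)
           for _ in range(n_experts)]
out = moe_forward(rng.normal(size=d), gate_w, experts, k=2)
print(out.shape)  # (16,)
```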
|  | DeepSeek-R1 | Llama 4 Maverick |
|---|---|---|
| Provider | DeepSeek | Meta |
| Web Site |  |  |
| Release Date | Jan 21, 2025 | Apr 05, 2025 |
| Modalities | text | text, images, video |
| API Providers | DeepSeek, Hugging Face | Meta AI, Hugging Face, Fireworks, Together, DeepInfra |
| Knowledge Cut-off Date | Unknown | 2024-08 |
| Open Source | Yes | Yes |
| Pricing (Input) | $0.55 per million tokens | Not available |
| Pricing (Output) | $2.19 per million tokens | Not available |
| MMLU | 90.8% (Pass@1) | Not available |
| MMLU-Pro | 84.0% (EM) | 80.5% |
| MMMU | - | 73.4% |
| HellaSwag | - | Not available |
| HumanEval | - | Not available |
| MATH | - | Not available |
| GPQA | 71.5% (Pass@1) | 69.8% (Diamond) |
| IFEval | 83.3% (Prompt Strict) | Not available |
| Mobile Application | - |  |
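The pricing rows above support a quick back-of-the-envelope cost estimate. The sketch below assumes simple linear per-token pricing (no caching or tiered discounts) and uses only the DeepSeek-R1 figures, since Maverick's pricing is not listed; `estimate_cost` is a hypothetical helper, not part of any API.

```python
# Per-million-token prices from the comparison table above.
DEEPSEEK_R1_INPUT_USD = 0.55   # per 1M input tokens
DEEPSEEK_R1_OUTPUT_USD = 2.19  # per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one DeepSeek-R1 API request."""
    return (input_tokens * DEEPSEEK_R1_INPUT_USD
            + output_tokens * DEEPSEEK_R1_OUTPUT_USD) / 1_000_000

# Example: a 2,000-token prompt with a 10,000-token reasoning-heavy reply.
print(f"${estimate_cost(2_000, 10_000):.4f}")  # $0.0230
```

Note how the ~4x higher output price dominates for reasoning models like R1, whose long chains of thought are billed as output tokens.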