DeepSeek-R1 is a 671B parameter Mixture-of-Experts (MoE) model with 37B activated parameters per token, trained via large-scale reinforcement learning with a focus on reasoning capabilities. It incorporates two RL stages for discovering improved reasoning patterns and aligning with human preferences, along with two SFT stages for seeding reasoning and non-reasoning capabilities. The model achieves performance comparable to OpenAI-o1 across math, code, and reasoning tasks.
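To put the "37B activated out of 671B total" figure in perspective, the short sketch below works out the activation fraction and a rough per-token compute estimate. The 2 × active-parameters FLOPs-per-token rule is a common back-of-the-envelope approximation, not something stated in this comparison, and it ignores attention and other overheads.

```python
# Back-of-the-envelope: what "37B activated of 671B total" implies per token.
# The 2 * N_active FLOPs/token rule is a rough approximation (an assumption,
# not from the comparison above) and ignores attention and other overheads.

TOTAL_PARAMS = 671e9      # total MoE parameters (from the description above)
ACTIVE_PARAMS = 37e9      # parameters activated per token (from the description above)

activation_fraction = ACTIVE_PARAMS / TOTAL_PARAMS
flops_per_token_moe = 2 * ACTIVE_PARAMS      # sparse MoE forward pass
flops_per_token_dense = 2 * TOTAL_PARAMS     # hypothetical dense model of the same size

print(f"Activated fraction per token: {activation_fraction:.1%}")          # ~5.5%
print(f"Approx. forward FLOPs/token (MoE):   {flops_per_token_moe:.2e}")    # ~7.4e10
print(f"Approx. forward FLOPs/token (dense): {flops_per_token_dense:.2e}")  # ~1.3e12
print(f"Rough compute saving vs. dense: {flops_per_token_dense / flops_per_token_moe:.0f}x")
```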
| | Claude Sonnet 4 | DeepSeek-R1 |
|---|---|---|
| Provider | Anthropic | DeepSeek |
| Release Date | May 22, 2025 | Jan 21, 2025 |
| Modalities | text, images | text |
| API Providers | Anthropic API, Amazon Bedrock, Google Cloud's Vertex AI | DeepSeek, HuggingFace |
| Knowledge Cut-off Date | Unknown | Unknown |
| Open Source | No | Yes |
| Pricing (Input) | $3 per million tokens | $0.55 per million tokens |
| Pricing (Output) | $15 per million tokens | $2.19 per million tokens |
| MMLU | 86.5% | 90.8% (Pass@1) |
| MMLU-Pro | - | 84% (EM) |
| MMMU | 74.4% | - |
| HellaSwag | - | - |
| HumanEval | - | - |
| MATH | - | - |
| GPQA Diamond | 75.4% | 71.5% (Pass@1) |
| IFEval | - | 83.3% (Prompt Strict) |
| AIME 2024 | - | - |
| AIME 2025 | 75.5% | - |
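To read the pricing rows as concrete spend, the sketch below estimates the cost of a hypothetical batch workload. Only the per-million-token prices come from the table; the request count and token lengths are assumptions chosen purely for illustration.

```python
# A minimal cost sketch using the per-token list prices from the table above.
# The workload sizes below are hypothetical assumptions, not part of the comparison.

PRICING = {  # USD per 1M tokens, taken from the comparison table
    "Claude Sonnet 4": {"input": 3.00, "output": 15.00},
    "DeepSeek-R1":     {"input": 0.55, "output": 2.19},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate the cost of a single request in USD."""
    p = PRICING[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Hypothetical workload: 1,000 requests, each with a 2,000-token prompt and a
# 1,500-token response (reasoning models often produce long outputs).
N_REQUESTS, IN_TOK, OUT_TOK = 1_000, 2_000, 1_500

for model in PRICING:
    total = N_REQUESTS * request_cost(model, IN_TOK, OUT_TOK)
    print(f"{model}: ${total:,.2f} for {N_REQUESTS} requests")
# Claude Sonnet 4 ≈ $28.50, DeepSeek-R1 ≈ $4.39 — roughly 6-7x cheaper at list price.
```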