DeepSeek-R1 is a 671B parameter Mixture-of-Experts (MoE) model with 37B activated parameters per token, trained via large-scale reinforcement learning with a focus on reasoning capabilities. It incorporates two RL stages for discovering improved reasoning patterns and aligning with human preferences, along with two SFT stages for seeding reasoning and non-reasoning capabilities. The model achieves performance comparable to OpenAI-o1 across math, code, and reasoning tasks.
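To make the sparse-activation figure concrete, below is a minimal top-k Mixture-of-Experts routing sketch in NumPy: a router scores every expert for each token and only the k best experts actually run, which is how a 671B-parameter model can spend only ~37B parameters of compute per token. The hidden size, expert count, and router here are illustrative toys, not DeepSeek-R1's actual configuration.

```python
import numpy as np

# Toy top-k MoE routing. All dimensions are illustrative, not the
# real DeepSeek-R1 (DeepSeek-V3-base) architecture.
rng = np.random.default_rng(0)

d_model = 16       # toy hidden size
n_experts = 8      # toy expert count
top_k = 2          # experts activated per token

# Router: a linear layer scoring each expert for the incoming token.
W_router = rng.normal(size=(d_model, n_experts))

# Each "expert" is a toy feed-forward weight matrix.
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]

def moe_forward(x: np.ndarray) -> np.ndarray:
    """Route one token through its top-k experts and mix the outputs."""
    logits = x @ W_router                # (n_experts,) router scores
    top = np.argsort(logits)[-top_k:]    # indices of the k best experts
    # Softmax over the selected experts' scores gives the mixing weights.
    w = np.exp(logits[top] - logits[top].max())
    w /= w.sum()
    # Only the selected experts run; the others contribute no compute.
    return sum(wi * (x @ experts[i]) for wi, i in zip(w, top))

token = rng.normal(size=d_model)
print(moe_forward(token).shape)  # (16,) -- same shape as the input token
```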
| | DeepSeek-R1 | Qwen 3 |
|---|---|---|
| Web Site | - | - |
| Provider | - | - |
| Chat | - | - |
| Release Date | - | - |
| Modalities | text | - |
| API Providers | DeepSeek, HuggingFace | - |
| Knowledge Cut-off Date | Unknown | - |
| Open Source | Yes | Yes (Source) |
| Pricing Input | $0.55 per million tokens | - |
| Pricing Output | $2.19 per million tokens | - |
| MMLU | 90.8% Pass@1 (Source) | - |
| MMLU-Pro | 84% EM (Source) | - |
| MMMU | - | - |
| HellaSwag | - | - |
| HumanEval | - | - |
| MATH | - | - |
| GPQA | 71.5% Pass@1 (Source) | - |
| IFEval | 83.3% Prompt Strict (Source) | - |
| SimpleQA | - | - |
| AIME 2024 | - | - |
| AIME 2025 | - | - |
| Aider Polyglot | - | - |
| LiveCodeBench v5 | - | - |
| Global MMLU (Lite) | - | - |
| MathVista | - | - |
| Mobile Application | - | - |
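Given the prices listed above ($0.55 per million input tokens, $2.19 per million output tokens), a per-request cost estimate works out as in the sketch below. The token counts in the example are made up; note that reasoning models like R1 tend to produce long chains of thought, so output tokens usually dominate the bill.

```python
# Cost estimator using the DeepSeek-R1 API prices listed in the table.
# The example token counts are hypothetical.
PRICE_IN = 0.55 / 1_000_000    # USD per input token
PRICE_OUT = 2.19 / 1_000_000   # USD per output token

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one API call at the listed rates."""
    return input_tokens * PRICE_IN + output_tokens * PRICE_OUT

# Example: a 2,000-token prompt with a 10,000-token reasoning-heavy reply.
print(f"${request_cost(2_000, 10_000):.4f}")  # $0.0230
```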