DeepSeek-R1

DeepSeek-R1 is a 671B parameter Mixture-of-Experts (MoE) model with 37B activated parameters per token, trained via large-scale reinforcement learning with a focus on reasoning capabilities. It incorporates two RL stages for discovering improved reasoning patterns and aligning with human preferences, along with two SFT stages for seeding reasoning and non-reasoning capabilities. The model achieves performance comparable to OpenAI-o1 across math, code, and reasoning tasks.
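To make the "37B activated of 671B total" figure concrete, the snippet below is a toy top-k Mixture-of-Experts layer in PyTorch. It is an illustrative sketch, not DeepSeek's implementation; all sizes (`d_model`, `n_experts`, `k`) are made-up small values. The point it demonstrates is that a router selects only k experts per token, so only a fraction of the layer's parameters runs for any given token:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoELayer(nn.Module):
    """Toy MoE layer: each token is routed to k of n_experts experts,
    so only a fraction of the layer's parameters is active per token
    (analogous to R1 activating 37B of its 671B parameters)."""

    def __init__(self, d_model=16, d_ff=32, n_experts=8, k=2):
        super().__init__()
        self.k = k
        self.gate = nn.Linear(d_model, n_experts)  # the router
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(),
                          nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):                            # x: (tokens, d_model)
        scores = self.gate(x)                        # (tokens, n_experts)
        weights, idx = scores.topk(self.k, dim=-1)   # keep k experts per token
        weights = F.softmax(weights, dim=-1)         # normalize kept scores
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e             # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

x = torch.randn(4, 16)                               # 4 example tokens
print(TopKMoELayer()(x).shape)                       # torch.Size([4, 16])
```

With k=2 of 8 experts, each token touches only a quarter of the expert parameters; production MoE models like R1 scale the same idea to hundreds of experts.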

GPT-4.1 Nano

GPT-4.1 Nano is OpenAI's smallest, fastest, and cheapest model in the GPT-4.1 family, aimed at low-latency tasks such as classification and autocompletion. It accepts text and image inputs, supports a context window of up to one million tokens, and was released alongside GPT-4.1 and GPT-4.1 Mini.

|                        | DeepSeek-R1              | GPT-4.1 Nano             |
|------------------------|--------------------------|--------------------------|
| Provider               | DeepSeek                 | OpenAI                   |
| Release Date           | Jan 21, 2025             | Apr 14, 2025             |
| Modalities             | text                     | text, images             |
| API Providers          | DeepSeek, HuggingFace    | OpenAI API               |
| Knowledge Cut-off Date | Unknown                  | -                        |
| Open Source            | Yes                      | No                       |
| Pricing (Input)        | $0.55 per million tokens | $0.10 per million tokens |
| Pricing (Output)       | $2.19 per million tokens | $0.40 per million tokens |
| MMLU                   | 90.8% (Pass@1)           | 80.1%                    |
| MMLU Pro               | 84% (EM)                 | -                        |
| MMMU                   | -                        | 55.4%                    |
| HellaSwag              | -                        | -                        |
| HumanEval              | -                        | -                        |
| MATH                   | -                        | -                        |
| GPQA                   | 71.5% (Pass@1)           | 50.3% (Diamond)          |
| IFEval                 | 83.3% (Prompt Strict)    | 74.5%                    |
| AIME 2024              | -                        | 29.4%                    |
| AIME 2025              | -                        | -                        |
| Image Reasoning        | -                        | 56.2%                    |

Two short sketches below expand on the pricing and API rows.
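The per-million-token rates in the pricing rows translate directly into per-request costs. A minimal sketch in plain Python; the dictionary layout, helper name, and token counts are illustrative assumptions, with the rates taken from the table above:

```python
# USD per million tokens, copied from the pricing rows above.
PRICES_PER_MTOK = {
    "DeepSeek-R1":  {"input": 0.55, "output": 2.19},
    "GPT-4.1 Nano": {"input": 0.10, "output": 0.40},
}

def cost_usd(model: str, input_tokens: int, output_tokens: int) -> float:
    """Hypothetical helper: cost of one request at the table's rates."""
    p = PRICES_PER_MTOK[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Example: a 2,000-token prompt that produces 10,000 output tokens.
for m in PRICES_PER_MTOK:
    print(f"{m}: ${cost_usd(m, 2_000, 10_000):.4f}")
# DeepSeek-R1: $0.0230
# GPT-4.1 Nano: $0.0042
```

GPT-4.1 Nano is roughly 5x cheaper per token in both directions, though a reasoning model's effective cost also depends on how many output tokens its reasoning traces consume.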
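Of the API providers listed in the table, DeepSeek's endpoint is OpenAI-compatible, so DeepSeek-R1 can be queried with the standard `openai` Python client. A minimal sketch, assuming the `openai` package is installed and a `DEEPSEEK_API_KEY` environment variable is set; `deepseek-reasoner` is DeepSeek's published model name for R1:

```python
import os
from openai import OpenAI

# DeepSeek exposes an OpenAI-compatible API at this base URL.
client = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],
    base_url="https://api.deepseek.com",
)

resp = client.chat.completions.create(
    model="deepseek-reasoner",  # DeepSeek's alias for DeepSeek-R1
    messages=[{"role": "user", "content": "What is 17 * 24?"}],
)
print(resp.choices[0].message.content)
```

The same client pointed at OpenAI's default base URL with `model="gpt-4.1-nano"` queries the other side of the comparison.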