DeepSeek-R1

DeepSeek-R1 is a 671B parameter Mixture-of-Experts (MoE) model with 37B activated parameters per token, trained via large-scale reinforcement learning with a focus on reasoning capabilities. It incorporates two RL stages for discovering improved reasoning patterns and aligning with human preferences, along with two SFT stages for seeding reasoning and non-reasoning capabilities. The model achieves performance comparable to OpenAI-o1 across math, code, and reasoning tasks.
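For reference, below is a minimal sketch of querying DeepSeek-R1 over an OpenAI-compatible API. The base URL and the "deepseek-reasoner" model id are assumptions based on DeepSeek's public API documentation and should be verified against the current docs.

```python
# Minimal sketch: querying DeepSeek-R1 over an OpenAI-compatible API.
# The base URL and "deepseek-reasoner" model id are assumptions here;
# confirm both against DeepSeek's current API documentation.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_DEEPSEEK_API_KEY",        # placeholder key
    base_url="https://api.deepseek.com",    # assumed OpenAI-compatible endpoint
)

response = client.chat.completions.create(
    model="deepseek-reasoner",              # assumed id mapping to DeepSeek-R1
    messages=[
        {"role": "user", "content": "Prove that the square root of 2 is irrational."}
    ],
)

print(response.choices[0].message.content)
```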

o4-mini

o4-mini is OpenAI's compact reasoning model, released on April 16, 2025. It accepts text and image inputs and is served through the OpenAI API.

                         DeepSeek-R1                    o4-mini
Provider                 DeepSeek                       OpenAI
Release Date             Jan 21, 2025                   Apr 16, 2025
Modalities               text                           text, images
API Providers            DeepSeek, HuggingFace          OpenAI API
Knowledge Cut-off Date   Unknown                        -
Open Source              Yes                            No
Pricing (Input)          $0.55 per million tokens       $1.10 per million tokens
Pricing (Output)         $2.19 per million tokens       $4.40 per million tokens
MMLU                     90.8% (Pass@1)                 -
MMLU Pro                 84% (EM)                       -
MMMU                     -                              81.6%
HellaSwag                -                              -
HumanEval                -                              14.28%
MATH                     -                              -
GPQA                     71.5% (Pass@1)                 81.4%
IFEval                   83.3% (Prompt Strict)          -
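To put the pricing rows in concrete terms, the sketch below estimates per-request cost from the listed per-million-token prices. The token counts are hypothetical; only the prices come from the table above.

```python
# Rough cost comparison using the listed per-million-token prices (USD).
# Token counts in the example are hypothetical; only the prices come from the table.
PRICES = {
    "DeepSeek-R1": {"input": 0.55, "output": 2.19},
    "o4-mini":     {"input": 1.10, "output": 4.40},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of a single request for the given model."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Example: a 2,000-token prompt with a 10,000-token reasoning-heavy answer.
for model in PRICES:
    print(f"{model}: ${request_cost(model, 2_000, 10_000):.4f}")
```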