DeepSeek-R1

DeepSeek-R1 is a 671B parameter Mixture-of-Experts (MoE) model with 37B activated parameters per token, trained via large-scale reinforcement learning with a focus on reasoning capabilities. It incorporates two RL stages for discovering improved reasoning patterns and aligning with human preferences, along with two SFT stages for seeding reasoning and non-reasoning capabilities. The model achieves performance comparable to OpenAI-o1 across math, code, and reasoning tasks.
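
To make the "671B total, 37B activated" figure concrete, here is a toy sketch of Mixture-of-Experts routing: each token is dispatched to only a few experts, so only a fraction of the layer's parameters participate in that token's forward pass. All sizes below are made-up illustrative values, not DeepSeek-R1's actual configuration.

```python
# Toy Mixture-of-Experts routing sketch (illustrative sizes only).
import numpy as np

rng = np.random.default_rng(0)

n_experts, top_k = 8, 2          # route each token to 2 of 8 experts
d_model, d_ff = 16, 32           # toy hidden sizes

# Each expert is a small two-layer feed-forward block.
experts = [(rng.standard_normal((d_model, d_ff)),
            rng.standard_normal((d_ff, d_model))) for _ in range(n_experts)]
router = rng.standard_normal((d_model, n_experts))

def moe_layer(x):
    """Apply the MoE layer to a single token vector x of shape (d_model,)."""
    logits = x @ router
    top = np.argsort(logits)[-top_k:]                      # pick the top-k experts
    weights = np.exp(logits[top]) / np.exp(logits[top]).sum()
    out = np.zeros(d_model)
    for w, idx in zip(weights, top):
        w_in, w_out = experts[idx]
        out += w * (np.maximum(x @ w_in, 0.0) @ w_out)     # only k experts actually run
    return out

token = rng.standard_normal(d_model)
print(moe_layer(token).shape)   # (16,) -- computed using 2 of 8 experts
```

Only the selected experts are evaluated per token, which is why the model's active parameter count per token can be far smaller than its total parameter count.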

Mistral Large 2

Mistral Large 2, developed by Mistral AI, offers a 128K-token context window and is priced at $3.00 per million input tokens and $9.00 per million output tokens. Released on July 24, 2024, the model scored 84.0 on the MMLU benchmark in a 5-shot evaluation, demonstrating strong performance across diverse tasks.
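
As a worked illustration of the listed prices, the short sketch below computes the cost of a single request for each model at the rates quoted on this page (the DeepSeek-R1 figures come from the comparison table below; the token counts are made-up example values, not measurements).

```python
# Worked example: per-request cost at the prices listed on this page.
PRICES = {
    # (input $ per 1M tokens, output $ per 1M tokens)
    "DeepSeek-R1": (0.55, 2.19),
    "Mistral Large 2": (3.00, 9.00),
}

input_tokens = 2_000    # example prompt length
output_tokens = 1_500   # example completion length

for model, (in_price, out_price) in PRICES.items():
    cost = input_tokens / 1e6 * in_price + output_tokens / 1e6 * out_price
    print(f"{model}: ${cost:.4f} per request")

# DeepSeek-R1: $0.0044 per request
# Mistral Large 2: $0.0195 per request
```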

|                        | DeepSeek-R1              | Mistral Large 2 |
|------------------------|--------------------------|-----------------|
| Provider               | DeepSeek                 | Mistral AI      |
| Web Site               | -                        | -               |
| Release Date           | Jan 21, 2025             | Jul 24, 2024    |
| Modalities             | Text                     | Text            |
| API Providers          | DeepSeek, HuggingFace    | Azure AI, AWS Bedrock, Google AI Studio, Vertex AI, Snowflake Cortex |
| Knowledge Cut-off Date | Unknown                  | Unknown         |
| Open Source            | Yes                      | Yes             |
| Pricing (Input)        | $0.55 per million tokens | $3.00 per million tokens |
| Pricing (Output)       | $2.19 per million tokens | $9.00 per million tokens |
| MMLU                   | 90.8% (Pass@1)           | 84% (5-shot)    |
| MMLU-Pro               | 84% (EM)                 | 50.69%          |
| MMMU                   | -                        | -               |
| HellaSwag              | -                        | -               |
| HumanEval              | -                        | -               |
| MATH                   | -                        | 1.13%           |
| GPQA                   | 71.5% (Pass@1)           | 24.94%          |
| IFEval                 | 83.3% (Prompt Strict)    | 84.01%          |
| Mobile Application     | -                        | -               |
