DeepSeek-R1 is a 671B parameter Mixture-of-Experts (MoE) model with 37B activated parameters per token, trained via large-scale reinforcement learning with a focus on reasoning capabilities. It incorporates two RL stages for discovering improved reasoning patterns and aligning with human preferences, along with two SFT stages for seeding reasoning and non-reasoning capabilities. The model achieves performance comparable to OpenAI-o1 across math, code, and reasoning tasks.
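The gap between 671B total and 37B activated parameters comes from the MoE design: a router selects only a few experts per token, so most weights sit idle on any given forward pass. Below is a minimal NumPy sketch of top-k expert routing to illustrate the idea; the expert count, k, and dimensions are toy values, not DeepSeek-R1's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions -- illustrative only, not DeepSeek-R1's real configuration.
d_model, n_experts, top_k = 16, 8, 2

# Each "expert" is a small feed-forward weight matrix.
experts = [rng.standard_normal((d_model, d_model)) * 0.1 for _ in range(n_experts)]
router_w = rng.standard_normal((d_model, n_experts)) * 0.1

def moe_layer(x):
    """Route a single token vector x through its top-k experts."""
    logits = x @ router_w              # router score for every expert
    top = np.argsort(logits)[-top_k:]  # indices of the k highest-scoring experts
    weights = np.exp(logits[top])
    weights /= weights.sum()           # softmax over the selected experts only
    # Only the chosen experts run, so compute scales with k, not n_experts:
    # this is how activated parameters stay far below total parameters.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(d_model)
print(moe_layer(token).shape)  # (16,)
```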
Mistral Large 2, developed by Mistral AI, offers a 128K-token context window and is priced at $3.00 per million input tokens and $9.00 per million output tokens. Released on July 24, 2024, the model scored 84.0 on the MMLU benchmark in a 5-shot evaluation, demonstrating strong performance across diverse tasks.
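Per-million-token pricing makes request costs easy to estimate. A quick back-of-the-envelope calculation using the rates quoted on this page (the 2,000-input / 500-output request size is a hypothetical example):

```python
# Cost estimate from the per-million-token rates quoted on this page.
PRICES = {  # (input $/1M tokens, output $/1M tokens)
    "DeepSeek-R1": (0.55, 2.19),
    "Mistral Large 2": (3.00, 9.00),
}

def request_cost(model, input_tokens, output_tokens):
    p_in, p_out = PRICES[model]
    return (input_tokens * p_in + output_tokens * p_out) / 1_000_000

for model in PRICES:
    print(f"{model}: ${request_cost(model, 2_000, 500):.6f} per request")
# DeepSeek-R1: $0.002195 per request
# Mistral Large 2: $0.010500 per request
```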
| | DeepSeek-R1 | Mistral Large 2 |
|---|---|---|
| Provider | DeepSeek | Mistral AI |
| Release Date | January 20, 2025 | July 24, 2024 |
| Modalities | text | text |
| API Providers | DeepSeek, HuggingFace | Azure AI, AWS Bedrock, Google AI Studio, Vertex AI, Snowflake Cortex |
| Knowledge Cut-off Date | Unknown | Unknown |
| Open Source | Yes | Yes |
| Pricing (Input) | $0.55 per million tokens | $3.00 per million tokens |
| Pricing (Output) | $2.19 per million tokens | $9.00 per million tokens |
| MMLU | 90.8% (Pass@1) | 84.0% (5-shot) |
| MMLU-Pro | 84.0% (EM) | 50.69% |
| MMMU | - | - |
| HellaSwag | - | - |
| HumanEval | - | - |
| MATH | - | 1.13% |
| GPQA | 71.5% (Pass@1) | 24.94% |
| IFEval | 83.3% (Prompt Strict) | 84.01% |
| SimpleQA | - | - |
| AIME 2024 | - | - |
| AIME 2025 | - | - |
| Aider Polyglot | - | - |
| LiveCodeBench v5 | - | - |
| Global MMLU (Lite) | - | - |
| MathVista | - | - |
| Mobile Application | - | - |
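Both models are served by the hosted providers listed above. As one illustration, here is a sketch of querying DeepSeek-R1 through DeepSeek's OpenAI-compatible endpoint; the base URL and the `deepseek-reasoner` model name follow DeepSeek's public API documentation, but treat them as assumptions to verify before use.

```python
import os
from openai import OpenAI  # pip install openai

# DeepSeek exposes an OpenAI-compatible API; endpoint and model name
# per DeepSeek's docs at the time of writing -- verify before relying on them.
client = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],
    base_url="https://api.deepseek.com",
)

response = client.chat.completions.create(
    model="deepseek-reasoner",  # DeepSeek-R1
    messages=[{"role": "user", "content": "What is 17 * 23?"}],
)
print(response.choices[0].message.content)
```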