DeepSeek-R1 is a 671B parameter Mixture-of-Experts (MoE) model with 37B activated parameters per token, trained via large-scale reinforcement learning with a focus on reasoning capabilities. It incorporates two RL stages for discovering improved reasoning patterns and aligning with human preferences, along with two SFT stages for seeding reasoning and non-reasoning capabilities. The model achieves performance comparable to OpenAI-o1 across math, code, and reasoning tasks.
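To illustrate why a 671B-parameter MoE model activates only ~37B parameters per token, here is a toy sketch of top-k expert routing: a router scores every expert, but only the top-k experts actually run and their outputs are gate-weighted. The expert count, dimensions, and linear router below are illustrative assumptions, not DeepSeek-R1's actual architecture (which uses a far larger DeepSeekMoE design with shared and routed experts).

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def moe_forward(x, experts, router_weights, top_k=2):
    """Route a single token vector through only its top-k experts.

    Because the non-selected experts never execute, only a fraction
    of the model's total parameters is 'activated' for this token.
    """
    # Router: one logit per expert (dot product with the token vector).
    logits = [sum(wi * xi for wi, xi in zip(w, x)) for w in router_weights]
    # Keep the top_k experts; renormalise their gates with a softmax.
    top = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:top_k]
    gates = softmax([logits[i] for i in top])
    # Gate-weighted sum of the selected experts' outputs.
    out = [0.0] * len(x)
    for g, i in zip(gates, top):
        y = experts[i](x)
        out = [o + g * yi for o, yi in zip(out, y)]
    return out, top

# Toy usage: 4 experts on a 1-dim token; only 2 of the 4 ever run.
experts = [lambda x, c=c: [c * v for v in x] for c in (1.0, 2.0, 3.0, 4.0)]
router_weights = [[0.1], [0.9], [0.5], [0.2]]
out, chosen = moe_forward([1.0], experts, router_weights, top_k=2)
# chosen == [1, 2]: experts 0 and 3 were skipped entirely.
```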
Gemini 2.0 Flash Thinking is an advanced reasoning model designed to enhance performance and explainability by making its thought process visible. It excels in complex problem-solving, coding challenges, and mathematical reasoning, demonstrating step-by-step solutions. Optimized for tasks that demand detailed explanations and logical analysis, the model also features native tool integration, including code execution and Google Search capabilities.
| | DeepSeek-R1 | Gemini 2.0 Flash Thinking |
|---|---|---|
| Web Site | - | - |
| Provider | - | - |
| Chat | - | - |
| Release Date | - | - |
| Modalities | text | text, images |
| API Providers | DeepSeek, HuggingFace | Google AI Studio, Vertex AI, Gemini API |
| Knowledge Cut-off Date | Unknown | 04.2024 |
| Open Source | Yes | No |
| Pricing (Input) | $0.55 per million tokens | Not available |
| Pricing (Output) | $2.19 per million tokens | Not available |
| MMLU | 90.8% (Pass@1) | Not available |
| MMLU-Pro | 84% (EM) | Not available |
| MMMU | - | 75.4% |
| HellaSwag | - | Not available |
| HumanEval | - | Not available |
| MATH | - | Not available |
| GPQA | 71.5% (Pass@1) | 74.2% (Diamond Science) |
| IFEval | 83.3% (Prompt Strict) | Not available |
| SimpleQA | - | - |
| AIME 2024 | - | - |
| AIME 2025 | - | - |
| Aider Polyglot | - | - |
| LiveCodeBench v5 | - | - |
| Global MMLU (Lite) | - | - |
| MathVista | - | - |
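The per-million-token prices above make cost estimation simple arithmetic. The sketch below assumes the table's listed DeepSeek-R1 rates ($0.55 input, $2.19 output per million tokens); actual billing may differ, for example with discounted cache-hit pricing.

```python
def deepseek_r1_cost(input_tokens, output_tokens,
                     input_price_per_m=0.55, output_price_per_m=2.19):
    """Estimate API cost in USD from per-million-token prices."""
    return (input_tokens * input_price_per_m
            + output_tokens * output_price_per_m) / 1_000_000

# e.g. a 10k-token prompt producing a 2k-token answer:
cost = deepseek_r1_cost(10_000, 2_000)
# → 0.00988 (about one cent)
```

Note how the 4x higher output price dominates for long reasoning traces, which matters for a model that emits lengthy chains of thought.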