DeepSeek-R1 is a 671B parameter Mixture-of-Experts (MoE) model with 37B activated parameters per token, trained via large-scale reinforcement learning with a focus on reasoning capabilities. It incorporates two RL stages for discovering improved reasoning patterns and aligning with human preferences, along with two SFT stages for seeding reasoning and non-reasoning capabilities. The model achieves performance comparable to OpenAI-o1 across math, code, and reasoning tasks.
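The sparsity of the MoE design follows directly from the figures above: only a small fraction of the 671B total parameters is active for any given token. A minimal sketch of that arithmetic, using the parameter counts stated in the text:

```python
# Active-parameter fraction for DeepSeek-R1's MoE design
# (parameter counts taken from the description above).
total_params_b = 671   # total parameters, in billions
active_params_b = 37   # parameters activated per token, in billions

fraction = active_params_b / total_params_b
print(f"{fraction:.1%} of parameters active per token")  # → 5.5% of parameters active per token
```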
Gemini 2.0 Pro is Google's most advanced model to date, delivering exceptional coding performance and handling complex prompts with ease. It comes equipped with enhanced capabilities such as native tool integration, image generation, and speech synthesis. Designed for advanced reasoning, the model supports multimodal inputs, including text, images, video, and audio. Available via Google AI Studio and Vertex AI, it offers substantial performance improvements over previous versions while maintaining high efficiency.
| | DeepSeek-R1 | Gemini 2.0 Pro |
|---|---|---|
| Provider | DeepSeek | Google |
| Release Date | - | - |
| Modalities | text | text, images, voice, video |
| API Providers | DeepSeek, HuggingFace | Google AI Studio, Vertex AI |
| Knowledge Cut-off Date | Unknown | 08.2024 |
| Open Source | Yes | No |
| Pricing (Input) | $0.55 per million tokens | $0.10 per million tokens |
| Pricing (Output) | $2.19 per million tokens | $0.40 per million tokens |
| MMLU | 90.8% (Pass@1) | Not available |
| MMLU-Pro | 84.0% (EM) | 79.1% |
| MMMU | - | 72.7% |
| HellaSwag | - | Not available |
| HumanEval | - | Not available |
| MATH | - | 91.8% |
| GPQA | 71.5% (Pass@1) | 64.7% (Diamond) |
| IFEval | 83.3% (Prompt Strict) | Not available |
| SimpleQA | - | - |
| AIME 2024 | - | - |
| AIME 2025 | - | - |
| Aider Polyglot | - | - |
| LiveCodeBench v5 | - | - |
| Global MMLU (Lite) | - | - |
| MathVista | - | - |
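The per-million-token prices above translate into per-request costs once you know a request's token counts. A minimal sketch, using the listed prices (the example token counts are hypothetical):

```python
# USD prices per 1M tokens, as listed in the comparison table above.
PRICING = {
    "DeepSeek-R1":    {"input": 0.55, "output": 2.19},
    "Gemini 2.0 Pro": {"input": 0.10, "output": 0.40},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request under the listed prices."""
    p = PRICING[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Example: a 2,000-token prompt with a 1,000-token completion.
for model in PRICING:
    print(f"{model}: ${request_cost(model, 2_000, 1_000):.6f}")
# → DeepSeek-R1: $0.003290
# → Gemini 2.0 Pro: $0.000600
```

Note that reasoning models such as DeepSeek-R1 tend to emit long chains of thought, so output-token counts (billed at the higher output rate) can dominate the total cost in practice.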