DeepSeek-R1 is a 671B parameter Mixture-of-Experts (MoE) model with 37B activated parameters per token, trained via large-scale reinforcement learning with a focus on reasoning capabilities. It incorporates two RL stages for discovering improved reasoning patterns and aligning with human preferences, along with two SFT stages for seeding reasoning and non-reasoning capabilities. The model achieves performance comparable to OpenAI-o1 across math, code, and reasoning tasks.
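The "37B activated parameters per token" figure follows from MoE routing: a router scores all experts, but only the top-k run for each token. A minimal conceptual sketch of top-k gating (illustrative only, not DeepSeek's actual routing code):

```python
import numpy as np

# Conceptual MoE routing sketch: the router scores every expert for a token,
# but only the k highest-scoring experts are activated, so only a fraction
# of the model's total parameters does work per token.
def route_token(router_logits, k=2):
    """Return the indices of the k activated experts and their mixing weights."""
    top_k = np.argsort(router_logits)[-k:]      # indices of the k best experts
    weights = np.exp(router_logits[top_k])
    weights /= weights.sum()                    # softmax over the selected experts
    return top_k, weights

# Toy example with 4 experts; a real model has many more.
logits = np.array([0.1, 2.0, -1.0, 0.5])
experts, weights = route_token(logits, k=2)
```

With k of, say, 8 experts out of 256 per layer, the activated share of parameters stays small even as total capacity grows, which is how a 671B-parameter model runs at 37B-per-token cost.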
Gemini 2.5 Pro is Google's most advanced AI model, engineered for deep reasoning and thoughtful response generation. It leads on key benchmarks, demonstrating strong logic and coding proficiency. Optimized for building dynamic web applications, autonomous coding systems, and code transformation, it delivers high-level performance. With built-in multimodal capabilities and an extended context window, the model efficiently processes large datasets and integrates diverse information sources to tackle complex challenges.
 | DeepSeek-R1 | Gemini 2.5 Pro
---|---|---
Provider | DeepSeek | Google
Release Date | Jan 21, 2025 | Mar 25, 2025
Modalities | text | text, images, voice, video
API Providers | DeepSeek, HuggingFace | Google AI Studio, Vertex AI, Gemini app
Knowledge Cut-off Date | Unknown | -
Open Source | Yes | No
Pricing (Input) | $0.55 per million tokens | Not available
Pricing (Output) | $2.19 per million tokens | Not available
MMLU | 90.8% (Pass@1) | Not available
MMLU-Pro | 84% (EM) | Not available
MMMU | - | 81.7%
HellaSwag | - | Not available
HumanEval | - | Not available
MATH | - | Not available
GPQA | 71.5% (Pass@1) | 84.0% (Diamond)
IFEval | 83.3% (Prompt Strict) | Not available
AIME 2024 | - | 92.0% |
AIME 2025 | - | 86.7% |
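To make the per-million-token pricing above concrete, here is a minimal sketch of a cost estimate for a single request. The helper function is hypothetical (not part of any official SDK); the default prices are DeepSeek-R1's rates from the table:

```python
# Hypothetical helper: estimate the dollar cost of one API request from
# per-million-token prices. Defaults use DeepSeek-R1's listed rates
# ($0.55/M input, $2.19/M output).
def estimate_cost(input_tokens, output_tokens,
                  price_in_per_m=0.55, price_out_per_m=2.19):
    """Return the cost in dollars for a single request."""
    return (input_tokens * price_in_per_m +
            output_tokens * price_out_per_m) / 1_000_000

# e.g. a 2,000-token prompt producing a 1,000-token completion:
cost = estimate_cost(2_000, 1_000)  # -> $0.00329
```

Because reasoning models emit long chains of thought, output tokens usually dominate the bill, which makes the higher output rate the number to watch.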