DeepSeek-R1 is a 671B-parameter Mixture-of-Experts (MoE) model with 37B parameters activated per token, trained via large-scale reinforcement learning with a focus on reasoning capabilities. Its training pipeline combines two RL stages, which discover improved reasoning patterns and align the model with human preferences, with two SFT stages that seed its reasoning and non-reasoning capabilities. The model achieves performance comparable to OpenAI-o1 across math, code, and reasoning tasks.
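Since DeepSeek serves R1 through an OpenAI-compatible API (and open weights are also on HuggingFace), a request can be issued with the standard `openai` client. The sketch below assumes DeepSeek's public endpoint and its `deepseek-reasoner` model ID; adjust both if you use a different provider.

```python
# Minimal sketch: querying DeepSeek-R1 via DeepSeek's OpenAI-compatible API.
# Assumes the `openai` Python package and a DEEPSEEK_API_KEY environment variable.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],
    base_url="https://api.deepseek.com",  # DeepSeek's OpenAI-compatible endpoint
)

response = client.chat.completions.create(
    model="deepseek-reasoner",  # DeepSeek's model ID for R1
    messages=[{"role": "user", "content": "Prove that sqrt(2) is irrational."}],
)
print(response.choices[0].message.content)
```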
OpenAI's o3-mini is a high-speed, cost-effective reasoning model designed for STEM applications, with strong performance in science, mathematics, and coding. Launched in January 2025, it includes essential developer features such as function calling, structured outputs, and developer messages. The model offers three reasoning effort levels (low, medium, and high), letting users trade deeper analysis against faster response times. Unlike the o3 model, it lacks vision capabilities. Initially available to developers in API usage tiers 3-5, it can be accessed via the Chat Completions API, Assistants API, and Batch API.
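The reasoning effort setting maps to a single request parameter in the Chat Completions API. A minimal sketch, assuming the official `openai` Python package (the `reasoning_effort` values and the `developer` message role follow OpenAI's documented API; the prompt is illustrative):

```python
# Minimal sketch: calling o3-mini with a developer message and high reasoning effort.
# Assumes the `openai` Python package and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o3-mini",
    reasoning_effort="high",  # one of "low", "medium", "high"
    messages=[
        {"role": "developer", "content": "Answer with rigorous, step-by-step math."},
        {"role": "user", "content": "Integrate x^2 * e^x dx."},
    ],
)
print(response.choices[0].message.content)
```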
| | DeepSeek-R1 | o3-mini |
|---|---|---|
| Provider | DeepSeek | OpenAI |
| Website | deepseek.com | openai.com |
| Release Date | Jan 21, 2025 | Jan 31, 2025 |
| Modalities | Text | Text |
| API Providers | DeepSeek, HuggingFace | OpenAI API |
| Knowledge Cut-off Date | Unknown | Unknown |
| Open Source | Yes | No |
| Pricing (Input) | $0.55 per million tokens | $1.10 per million tokens |
| Pricing (Output) | $2.19 per million tokens | $4.40 per million tokens |
| MMLU | 90.8% (Pass@1) | 86.9% (Pass@1, high effort) |
| MMLU-Pro | 84.0% (EM) | Not available |
| MMMU | Not available | Not available |
| HellaSwag | Not available | Not available |
| HumanEval | Not available | Not available |
| MATH | Not available | 97.9% (Pass@1, high effort) |
| GPQA | 71.5% (Pass@1) | 79.7% (0-shot, high effort) |
| IFEval | 83.3% (Prompt Strict) | Not available |
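At these list prices, the relative cost of a workload is simple arithmetic: cost = (input tokens / 1M) × input price + (output tokens / 1M) × output price. A small sketch, with hypothetical token counts chosen purely for illustration:

```python
# Back-of-the-envelope cost comparison at the listed per-million-token prices.
# The 50M-input / 10M-output workload below is a hypothetical illustration.
PRICES = {  # (input $/1M tokens, output $/1M tokens)
    "DeepSeek-R1": (0.55, 2.19),
    "o3-mini": (1.10, 4.40),
}

input_tokens, output_tokens = 50_000_000, 10_000_000

for model, (p_in, p_out) in PRICES.items():
    cost = input_tokens / 1e6 * p_in + output_tokens / 1e6 * p_out
    print(f"{model}: ${cost:,.2f}")
# DeepSeek-R1: $49.40, o3-mini: $99.00 -- roughly 2x at these rates.
```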