DeepSeek-R1 is a 671B parameter Mixture-of-Experts (MoE) model with 37B activated parameters per token, trained via large-scale reinforcement learning with a focus on reasoning capabilities. It incorporates two RL stages for discovering improved reasoning patterns and aligning with human preferences, along with two SFT stages for seeding reasoning and non-reasoning capabilities. The model achieves performance comparable to OpenAI-o1 across math, code, and reasoning tasks.
Claude 3.7 Sonnet is Anthropic's most advanced model to date and the first hybrid reasoning model on the market. It offers both a standard mode and an extended thinking mode, the latter exposing transparent, step-by-step reasoning. The model excels at coding and front-end web development, achieving state-of-the-art results on SWE-bench Verified and TAU-bench. It is available via Claude.ai, the Anthropic API, Amazon Bedrock, and Google Cloud Vertex AI.
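The extended thinking mode is exposed directly through the API. Below is a minimal sketch, assuming the Anthropic Python SDK (`anthropic` package) and the `claude-3-7-sonnet-20250219` model ID; the exact model ID and thinking-parameter shape follow Anthropic's published documentation at the time of writing and may change.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-7-sonnet-20250219",  # assumed model ID; check current docs
    max_tokens=4096,                      # must exceed the thinking budget
    thinking={"type": "enabled", "budget_tokens": 2048},  # enable extended thinking
    messages=[{"role": "user", "content": "How many primes are there below 100?"}],
)

# The response interleaves "thinking" blocks (the step-by-step reasoning)
# with regular "text" blocks that carry the final answer.
for block in response.content:
    if block.type == "thinking":
        print("[thinking]", block.thinking)
    elif block.type == "text":
        print("[answer]", block.text)
```

In standard mode the `thinking` argument is simply omitted, and the response contains only text blocks.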
| | DeepSeek-R1 | Claude 3.7 Sonnet |
|---|---|---|
| Provider | DeepSeek | Anthropic |
| Release Date | Jan 21, 2025 | Feb 24, 2025 |
| Modalities | Text | Text, images |
| API Providers | DeepSeek, Hugging Face | Claude.ai, Anthropic API, Amazon Bedrock, Google Cloud Vertex AI |
| Knowledge Cut-off Date | Unknown | Unknown |
| Open Source | Yes | No |
| Pricing (Input) | $0.55 per million tokens | $3.00 per million tokens |
| Pricing (Output) | $2.19 per million tokens | $15.00 per million tokens |
| MMLU | 90.8% (Pass@1) | Not available |
| MMLU-Pro | 84.0% (EM) | Not available |
| MMMU | Not available | 71.8% |
| HellaSwag | Not available | Not available |
| HumanEval | Not available | Not available |
| MATH | Not available | 82.2% |
| GPQA | 71.5% (Pass@1) | 68.0% (Diamond) |
| IFEval | 83.3% (Prompt Strict) | 90.8% |
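To put the pricing rows in concrete terms, the snippet below is a plain arithmetic sketch using the per-million-token rates listed in the table; the workload of 200k input and 50k output tokens is hypothetical, and provider-specific discounts (caching, batching) are ignored.

```python
# Per-million-token prices from the table above (USD).
PRICES = {
    "DeepSeek-R1": {"input": 0.55, "output": 2.19},
    "Claude 3.7 Sonnet": {"input": 3.00, "output": 15.00},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimated cost in USD, ignoring caching and batch discounts."""
    p = PRICES[model]
    return (input_tokens / 1_000_000) * p["input"] + (output_tokens / 1_000_000) * p["output"]

# Hypothetical workload: 200k input tokens and 50k output tokens.
for model in PRICES:
    print(f"{model}: ${estimate_cost(model, 200_000, 50_000):.2f}")
```

At these list prices the same workload comes to roughly $0.22 on DeepSeek-R1 versus $1.35 on Claude 3.7 Sonnet.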