DeepSeek-R1

DeepSeek-R1 is a 671B parameter Mixture-of-Experts (MoE) model with 37B activated parameters per token, trained via large-scale reinforcement learning with a focus on reasoning capabilities. It incorporates two RL stages for discovering improved reasoning patterns and aligning with human preferences, along with two SFT stages for seeding reasoning and non-reasoning capabilities. The model achieves performance comparable to OpenAI-o1 across math, code, and reasoning tasks.
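The gap between 671B total and 37B activated parameters comes from MoE routing: a gating network scores all experts per token but only the top-k actually run. The sketch below is a toy illustration of that idea (hypothetical sizes, NumPy, not DeepSeek's actual architecture or code):

```python
import numpy as np

# Toy top-k Mixture-of-Experts routing (illustrative only, not DeepSeek-R1's
# implementation). A router scores every expert for each token, but only the
# top-k experts are evaluated, so the "activated" parameter count per token
# is a small fraction of the total parameter count.
rng = np.random.default_rng(0)

n_experts, top_k, d = 8, 2, 16              # hypothetical sizes
router = rng.normal(size=(d, n_experts))    # gating/router weights
experts = rng.normal(size=(n_experts, d, d)) * 0.1  # one weight matrix per expert

def moe_forward(x):
    logits = x @ router                     # score all n_experts
    top = np.argsort(logits)[-top_k:]       # indices of the k best experts
    weights = np.exp(logits[top])
    weights /= weights.sum()                # softmax over the chosen experts only
    # Only top_k of the n_experts weight matrices are touched for this token.
    return sum(w * (x @ experts[i]) for i, w in zip(top, weights))

token = rng.normal(size=d)
out = moe_forward(token)
print(out.shape)  # (16,)
```

With 2 of 8 experts active, only a quarter of the expert parameters participate in each forward pass, which is the same mechanism (at vastly larger scale) behind 37B active out of 671B total.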

Claude 3.7 Sonnet - Extended Thinking

Claude 3.7 Sonnet is Anthropic's most advanced model to date and the first hybrid reasoning model on the market. It offers both a standard mode and an extended thinking mode, the latter exposing transparent, step-by-step reasoning. The model shows significant improvements in coding and front-end web development, achieving state-of-the-art results on SWE-bench Verified and TAU-bench. It is available via Claude.ai, the Anthropic API, Amazon Bedrock, and Google Cloud's Vertex AI.

| | DeepSeek-R1 | Claude 3.7 Sonnet - Extended Thinking |
|---|---|---|
| Provider | DeepSeek | Anthropic |
| Release Date | Jan 21, 2025 | Feb 24, 2025 |
| Modalities | text | text, images |
| API Providers | DeepSeek, HuggingFace | Claude.ai, Anthropic API, Amazon Bedrock, Google Cloud Vertex AI |
| Knowledge Cut-off Date | Unknown | - |
| Open Source | Yes | No |
| Pricing (Input) | $0.55 per million tokens | $3.00 per million tokens |
| Pricing (Output) | $2.19 per million tokens | $15.00 per million tokens |
| MMLU | 90.8% (Pass@1) | Not available |
| MMLU Pro | 84% (EM) | Not available |
| MMMU | - | 75% |
| HellaSwag | - | Not available |
| HumanEval | - | Not available |
| MATH | - | 96.2% |
| GPQA | 71.5% (Pass@1) | 84.8% (Diamond) |
| IFEval | 83.3% (Prompt Strict) | 93.2% |
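To make the pricing rows concrete, here is a small Python sketch that estimates per-request cost from the per-million-token prices listed above (the token counts in the example are hypothetical):

```python
# Cost estimate from the per-million-token prices in the comparison above.
# Prices are USD per 1M tokens; the example token counts are made up.
PRICES = {
    "DeepSeek-R1":       {"input": 0.55, "output": 2.19},
    "Claude 3.7 Sonnet": {"input": 3.00, "output": 15.00},
}

def request_cost(model, input_tokens, output_tokens):
    """Return the USD cost of one request for the given model."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Example: a request with 10k input tokens and 2k output tokens.
for model in PRICES:
    print(f"{model}: ${request_cost(model, 10_000, 2_000):.4f}")
```

At those example volumes, DeepSeek-R1 comes out roughly 6x cheaper per request, though output-heavy workloads widen the gap further given the 15.00 vs 2.19 output price.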