DeepSeek-R1

DeepSeek-R1 is a 671B parameter Mixture-of-Experts (MoE) model with 37B activated parameters per token, trained via large-scale reinforcement learning with a focus on reasoning capabilities. It incorporates two RL stages for discovering improved reasoning patterns and aligning with human preferences, along with two SFT stages for seeding reasoning and non-reasoning capabilities. The model achieves performance comparable to OpenAI-o1 across math, code, and reasoning tasks.
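To illustrate how an MoE model activates only a fraction of its parameters per token, here is a minimal sketch of top-k expert routing in Python with NumPy. The expert count, hidden sizes, and top-k value are illustrative placeholders, not DeepSeek-R1's actual configuration.

```python
import numpy as np

# Minimal sketch of top-k MoE routing; all sizes are illustrative,
# not DeepSeek-R1's actual configuration.
rng = np.random.default_rng(0)

d_model, n_experts, top_k = 64, 8, 2          # hypothetical dimensions
router = rng.standard_normal((d_model, n_experts))
experts = [rng.standard_normal((d_model, d_model)) for _ in range(n_experts)]

def moe_layer(x: np.ndarray) -> np.ndarray:
    """Route a single token vector to its top-k experts.

    Only the selected experts' weights participate in the forward
    pass, which is why the activated parameters per token are a
    small fraction of the total parameter count.
    """
    logits = x @ router                        # router score per expert
    chosen = np.argsort(logits)[-top_k:]       # indices of top-k experts
    weights = np.exp(logits[chosen])
    weights /= weights.sum()                   # softmax over chosen experts
    return sum(w * (x @ experts[i]) for w, i in zip(weights, chosen))

token = rng.standard_normal(d_model)
print(moe_layer(token).shape)                  # (64,)
```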

Claude 3.5 Haiku

Claude 3.5 Haiku, developed by Anthropic, offers a 200,000-token context window. Pricing is set at $1 per million input tokens and $5 per million output tokens, with potential savings of up to 90% through prompt caching and 50% via the Message Batches API. Released on November 4, 2024, this model excels in code completion, interactive chatbots, data extraction and labeling, as well as real-time content moderation.
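To make the pricing concrete, the sketch below estimates a request's cost from per-token rates and the stated discounts (up to 90% off cached input reads, 50% off via batching). The function and its breakdown are an illustrative assumption, not an official Anthropic calculator; the rates used here are the $0.80/$4.00 figures from the comparison table below.

```python
# Hypothetical cost estimator for Claude 3.5 Haiku, assuming the
# $0.80 / $4.00 per-million-token rates from the table below and the
# discounts mentioned above: up to 90% off cached input reads and
# 50% off via the Message Batches API.
INPUT_PER_MTOK = 0.80
OUTPUT_PER_MTOK = 4.00

def estimate_cost(input_tokens: int, output_tokens: int,
                  cached_fraction: float = 0.0,
                  batched: bool = False) -> float:
    """Return an approximate request cost in USD."""
    fresh = input_tokens * (1 - cached_fraction)
    cached = input_tokens * cached_fraction
    cost = (fresh * INPUT_PER_MTOK
            + cached * INPUT_PER_MTOK * 0.10   # 90% discount on cached reads
            + output_tokens * OUTPUT_PER_MTOK) / 1_000_000
    if batched:
        cost *= 0.5                            # 50% batch discount
    return cost

# 100k input tokens (80% served from cache) and 2k output tokens:
print(f"${estimate_cost(100_000, 2_000, cached_fraction=0.8):.4f}")  # $0.0304
```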

| | DeepSeek-R1 | Claude 3.5 Haiku |
|---|---|---|
| Provider | DeepSeek | Anthropic |
| Release Date | January 20, 2025 | November 4, 2024 |
| Modalities | text | text |
| API Providers | DeepSeek, HuggingFace | Anthropic, AWS Bedrock, Vertex AI |
| Knowledge Cut-off Date | Unknown | 01.04.2024 |
| Open Source | Yes | No |
| Pricing (Input) | $0.55 per million tokens | $0.80 per million tokens |
| Pricing (Output) | $2.19 per million tokens | $4.00 per million tokens |
| MMLU | 90.8% (Pass@1) | Not available |
| MMLU-Pro | 84% (EM) | 65% (0-shot CoT) |
| MMMU | - | Not available |
| HellaSwag | - | Not available |
| HumanEval | - | 88.1% (0-shot) |
| MATH | - | 69.4% (0-shot CoT) |
| GPQA | 71.5% (Pass@1) | Not available |
| IFEval | 83.3% (Prompt Strict) | Not available |
| SimpleQA | - | - |
| AIME 2024 | - | - |
| AIME 2025 | - | - |
| Aider Polyglot | - | - |
| LiveCodeBench v5 | - | - |
| Global MMLU (Lite) | - | - |
| MathVista | - | - |
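Several of the scores above are reported as Pass@1. For reference, the standard unbiased pass@k estimator from the HumanEval paper (Chen et al., 2021) can be computed as below, where n is the number of samples generated per problem and c the number that pass; the sample counts in the usage example are made up.

```python
import math

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator from Chen et al. (2021):
    1 - C(n - c, k) / C(n, k), i.e. the probability that at least
    one of k drawn samples (out of n generated, c correct) passes."""
    if n - c < k:
        return 1.0
    return 1.0 - math.comb(n - c, k) / math.comb(n, k)

# e.g. 200 samples per problem, 40 of which pass:
print(pass_at_k(200, 40, 1))   # 0.2, which equals c/n when k = 1
```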