DeepSeek-R1

DeepSeek-R1 is a 671B parameter Mixture-of-Experts (MoE) model with 37B activated parameters per token, trained via large-scale reinforcement learning with a focus on reasoning capabilities. It incorporates two RL stages for discovering improved reasoning patterns and aligning with human preferences, along with two SFT stages for seeding reasoning and non-reasoning capabilities. The model achieves performance comparable to OpenAI-o1 across math, code, and reasoning tasks.
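The sparsity comes from Mixture-of-Experts routing: a router scores a pool of expert sub-networks for each token and only the top-k experts run, which is how just 37B of the 671B parameters are active per token. A minimal sketch of top-k routing (the dimensions, expert count, and top_k below are illustrative, not DeepSeek-R1's actual configuration):

```python
# Minimal sketch of top-k Mixture-of-Experts routing.
# Sizes are illustrative, not DeepSeek-R1's real configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    def __init__(self, d_model=512, d_ff=2048, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, n_experts)  # scores every expert per token
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):  # x: (tokens, d_model)
        gate_logits = self.router(x)
        weights, idx = gate_logits.topk(self.top_k, dim=-1)  # keep only top-k experts
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e in range(len(self.experts)):
                mask = idx[:, slot] == e          # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot, None] * self.experts[e](x[mask])
        return out

moe = TopKMoE()
tokens = torch.randn(4, 512)
print(moe(tokens).shape)  # torch.Size([4, 512]); only 2 of 8 experts ran per token
```

Because only top_k experts execute per token, compute scales with the active parameters rather than the full parameter count, which is what makes a 671B-parameter model practical to serve.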

Grok 3 Beta

Grok 3 is xAI's most advanced model, trained on the Colossus supercluster with 10 times the compute of previous state-of-the-art models. It offers a 1M-token context window and reasoning capabilities strengthened through large-scale reinforcement learning, letting it think for anywhere from seconds to minutes on complex problems. The model achieves top-tier performance across academic benchmarks and real-world user evaluations, earning an Elo score of 1402 in the Chatbot Arena. It was released alongside Grok 3 Mini, a cost-efficient variant optimized for streamlined reasoning.
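For context, an Arena Elo rating maps directly to an expected head-to-head win rate. A minimal sketch of the standard Elo expected-score formula (the 1280-rated opponent below is hypothetical, chosen only for illustration):

```python
# Expected win probability under the standard Elo model.
def elo_expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that A beats B, per the Elo expected-score formula."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

# e.g., a 1402-rated model against a hypothetical 1280-rated one:
print(f"{elo_expected_score(1402, 1280):.1%}")  # ~66.9%
```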

|                        | DeepSeek-R1               | Grok 3 Beta          |
| ---------------------- | ------------------------- | -------------------- |
| Provider               | DeepSeek                  | xAI                  |
| Release Date           | Jan 21, 2025              | Jan 19, 2025         |
| Modalities             | text                      | text, images, video  |
| API Providers          | DeepSeek, HuggingFace     | xAI                  |
| Knowledge Cut-off Date | Unknown                   | 2025-01              |
| Open Source            | Yes                       | No                   |
| Pricing (Input)        | $0.55 per million tokens  | Not available        |
| Pricing (Output)       | $2.19 per million tokens  | Not available        |
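At the listed rates, a request's cost is simply tokens × price ÷ 1,000,000 in each direction. A minimal sketch using DeepSeek-R1's listed prices (the token counts are illustrative):

```python
# Cost of one request at DeepSeek-R1's listed API prices.
INPUT_PRICE_PER_M = 0.55   # USD per million input tokens
OUTPUT_PRICE_PER_M = 2.19  # USD per million output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# e.g., an 8,000-token prompt with a 2,000-token completion (illustrative sizes):
print(f"${request_cost(8_000, 2_000):.4f}")  # $0.0088
```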
| Benchmark | DeepSeek-R1            | Grok 3 Beta                    |
| --------- | ---------------------- | ------------------------------ |
| MMLU      | 90.8% (Pass@1)         | Not available                  |
| MMLU Pro  | 84% (EM)               | 79.9% (base model)             |
| MMMU      | -                      | 78% (with Think mode)          |
| HellaSwag | -                      | Not available                  |
| HumanEval | -                      | Not available                  |
| MATH      | -                      | Not available                  |
| GPQA      | 71.5% (Pass@1)         | 84.6% (Diamond, with Think mode) |
| IFEval    | 83.3% (Prompt Strict)  | Not available                  |
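Several scores above are reported as Pass@1: the probability that a single sampled answer is correct. The unbiased pass@k estimator popularized by the HumanEval benchmark generalizes this to k samples; a minimal sketch (the sample counts below are illustrative):

```python
# Unbiased pass@k estimator: given n samples of which c are correct,
# estimate P(at least one of k randomly chosen samples is correct).
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    if n - c < k:          # too few failures to fill k slots: success guaranteed
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# e.g., 7 correct out of 10 samples (illustrative counts):
print(f"{pass_at_k(10, 7, 1):.0%}")  # 70% — for k=1 this reduces to c/n
```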