DeepSeek-R1

DeepSeek-R1 is a 671B-parameter Mixture-of-Experts (MoE) model that activates 37B parameters per token, trained via large-scale reinforcement learning with a focus on reasoning capabilities. Its training pipeline combines two RL stages, which discover improved reasoning patterns and align the model with human preferences, with two SFT stages that seed its reasoning and non-reasoning capabilities. The model achieves performance comparable to OpenAI o1 across math, code, and reasoning tasks.
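
For orientation, below is a minimal sketch of querying DeepSeek-R1 through DeepSeek's hosted, OpenAI-compatible API (one of the API providers listed in the comparison below). The base URL, the `deepseek-reasoner` model name, and the separate `reasoning_content` field follow DeepSeek's public API documentation; the prompt is a placeholder and the exact response fields may change.

```python
# Minimal sketch: querying DeepSeek-R1 via DeepSeek's OpenAI-compatible API.
# Assumes the `openai` Python SDK is installed and DEEPSEEK_API_KEY is set.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],
    base_url="https://api.deepseek.com",  # DeepSeek's OpenAI-compatible endpoint
)

response = client.chat.completions.create(
    model="deepseek-reasoner",  # DeepSeek-R1 reasoning model
    messages=[
        {"role": "user", "content": "Prove that the square root of 2 is irrational."}
    ],
)

message = response.choices[0].message
# For the reasoner model, the chain of thought is returned separately from the
# final answer (field name per DeepSeek's docs; read defensively in case it changes).
print(getattr(message, "reasoning_content", None))
print(message.content)
```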

Qwen2.5-VL-32B

In the five months since Qwen2-VL was released, developers have built new models on top of it and contributed valuable feedback. Qwen2.5-VL now introduces enhanced capabilities, including precise analysis of images, text, and charts, as well as object localization with structured JSON outputs. It understands long videos, identifies key events within them, and can act as an agent that operates tools on computers and phones. The model's architecture features dynamic video processing and an optimized ViT encoder for improved speed and accuracy.
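
As a sketch of how the structured-output and localization abilities are typically exercised, the snippet below loads the checkpoint with Hugging Face transformers and asks for bounding boxes as JSON. The `Qwen2_5_VLForConditionalGeneration` class, `AutoProcessor`, and the `qwen_vl_utils.process_vision_info` helper follow the Qwen/Qwen2.5-VL-32B-Instruct model card; the image URL and prompt are placeholders, and the 32B weights require substantial GPU memory.

```python
# Minimal sketch: image grounding with Qwen2.5-VL-32B-Instruct via Hugging Face
# transformers, following the usage pattern from the model card.
import torch
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

model_id = "Qwen/Qwen2.5-VL-32B-Instruct"
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "https://example.com/street_scene.jpg"},  # placeholder URL
            {"type": "text", "text": "Locate every car and return bounding boxes as JSON."},
        ],
    }
]

# Build the chat prompt and collect the vision inputs referenced in the messages.
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text], images=image_inputs, videos=video_inputs,
    padding=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=512)
# Strip the prompt tokens before decoding the generated answer.
trimmed = [out[len(inp):] for inp, out in zip(inputs.input_ids, output_ids)]
print(processor.batch_decode(trimmed, skip_special_tokens=True)[0])
```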

| | DeepSeek-R1 | Qwen2.5-VL-32B |
|---|---|---|
| Web Site | | |
| Provider | | |
| Chat | | |
| Release Date | | |
| Modalities | text | text, images, video |
| API Providers | DeepSeek, HuggingFace | - |
| Knowledge Cut-off Date | Unknown | Unknown |
| Open Source | Yes | Yes |
| Pricing (Input) | $0.55 per million tokens | $0 |
| Pricing (Output) | $2.19 per million tokens | $0 |
| MMLU | 90.8% (Pass@1) | 78.4% |
| MMLU-Pro | 84% (EM) | 49.5% |
| MMMU | - | 70% |
| HellaSwag | - | Not available |
| HumanEval | - | Not available |
| MATH | - | 82.2% |
| GPQA | 71.5% (Pass@1) | 46.0% (Diamond) |
| IFEval | 83.3% (Prompt Strict) | Not available |
| SimpleQA | - | - |
| AIME 2024 | - | - |
| AIME 2025 | - | - |
| Aider Polyglot | - | - |
| LiveCodeBench v5 | - | - |
| Global MMLU (Lite) | - | - |
| MathVista | - | - |
| Mobile Application | - | |
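
As a quick worked example of the pricing rows above, the snippet below estimates the cost of a single DeepSeek-R1 request at the listed rates ($0.55 per million input tokens, $2.19 per million output tokens); the token counts are invented purely for illustration.

```python
# Worked example: estimating DeepSeek-R1 request cost from the per-token prices
# listed in the table above. Token counts are illustrative, not measured.
INPUT_PRICE_PER_M = 0.55   # USD per million input tokens
OUTPUT_PRICE_PER_M = 2.19  # USD per million output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for one request at the listed DeepSeek-R1 prices."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M + \
           (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# e.g. a 2,000-token prompt with a 10,000-token reasoning-heavy answer:
print(f"${request_cost(2_000, 10_000):.4f}")  # ≈ $0.0230
```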
