DeepSeek-R1

DeepSeek-R1 is a 671B parameter Mixture-of-Experts (MoE) model with 37B activated parameters per token, trained via large-scale reinforcement learning with a focus on reasoning capabilities. It incorporates two RL stages for discovering improved reasoning patterns and aligning with human preferences, along with two SFT stages for seeding reasoning and non-reasoning capabilities. The model achieves performance comparable to OpenAI-o1 across math, code, and reasoning tasks.
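
The "37B activated out of 671B total" figure comes from sparse expert routing: each token is dispatched to only a few experts, so most parameters sit idle on any given forward pass. Below is a toy sketch of top-k routing; the dimensions, expert count, and gating details are illustrative only, not DeepSeek-R1's actual implementation (which uses the more elaborate DeepSeekMoE design with shared and routed experts):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    """Toy Mixture-of-Experts layer: each token runs through only k of
    n_experts feed-forward blocks, so the activated parameter count per
    token is a small fraction of the layer's total parameters."""

    def __init__(self, dim: int = 64, n_experts: int = 8, k: int = 2):
        super().__init__()
        self.k = k
        self.gate = nn.Linear(dim, n_experts, bias=False)  # router
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
             for _ in range(n_experts)]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (tokens, dim)
        scores = self.gate(x)                        # (tokens, n_experts)
        weights, idx = scores.topk(self.k, dim=-1)   # pick k experts per token
        weights = F.softmax(weights, dim=-1)         # normalize over the chosen k
        out = torch.zeros_like(x)
        for slot in range(self.k):                   # run each token's chosen experts
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

moe = TopKMoE()
print(moe(torch.randn(10, 64)).shape)  # torch.Size([10, 64])
```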

Llama 3.1 Nemotron 70B Instruct

NVIDIA's Llama 3.1 Nemotron 70B Instruct is a language model optimized for delivering accurate and informative responses. Built on the Llama 3.1 70B architecture and enhanced with Reinforcement Learning from Human Feedback (RLHF), it achieves top scores on automatic alignment benchmarks. Designed for applications that demand high precision and helpfulness in response generation, it is well suited to user queries across many domains.
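
Both models can be queried through OpenAI-compatible endpoints offered by the API providers listed in the comparison below. A minimal sketch using the openai Python client; the base URLs are the providers' public endpoints, and the model IDs (deepseek-reasoner, nvidia/llama-3.1-nemotron-70b-instruct) are assumptions to verify against each provider's current catalog:

```python
from openai import OpenAI

# DeepSeek-R1 via DeepSeek's own API (model ID "deepseek-reasoner" assumed).
deepseek = OpenAI(base_url="https://api.deepseek.com", api_key="YOUR_DEEPSEEK_KEY")
r1 = deepseek.chat.completions.create(
    model="deepseek-reasoner",
    messages=[{"role": "user", "content": "How many primes are below 100?"}],
)
print(r1.choices[0].message.content)

# Nemotron 70B via OpenRouter (model ID assumed).
openrouter = OpenAI(base_url="https://openrouter.ai/api/v1", api_key="YOUR_OPENROUTER_KEY")
nemotron = openrouter.chat.completions.create(
    model="nvidia/llama-3.1-nemotron-70b-instruct",
    messages=[{"role": "user", "content": "How many primes are below 100?"}],
)
print(nemotron.choices[0].message.content)
```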

| | DeepSeek-R1 | Llama 3.1 Nemotron 70B Instruct |
|---|---|---|
| Provider | DeepSeek | NVIDIA |
| Release Date | Jan 21, 2025 | Oct 15, 2024 |
| Modalities | Text | Text |
| API Providers | DeepSeek, HuggingFace | OpenRouter |
| Knowledge Cut-off Date | Unknown | Unknown |
| Open Source | Yes | Yes |
| Pricing (Input) | $0.55 per million tokens | $0.35 per million tokens |
| Pricing (Output) | $2.19 per million tokens | $0.40 per million tokens |
| MMLU | 90.8% (Pass@1) | 85% (5-shot) |
| MMLU-Pro | 84% (EM) | Not available |
| MMMU | Not available | Not available |
| HellaSwag | Not available | Not available |
| HumanEval | Not available | 75% |
| MATH | Not available | 71% |
| GPQA | 71.5% (Pass@1) | Not available |
| IFEval | 83.3% (Prompt Strict) | Not available |
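
The benchmark rows mix evaluation protocols: Pass@1 scores a problem as solved if a single sampled answer is correct (estimated over many samples), 5-shot prepends five worked examples to the prompt, EM is exact match, and Prompt Strict is IFEval's strictest instruction-following criterion. For reference, here is a sketch of the standard unbiased pass@k estimator from the HumanEval paper (Chen et al., 2021), of which Pass@1 is the k=1 case:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021): the probability that
    at least one of k samples drawn from n generations is correct, given
    that c of the n generations were correct."""
    if n - c < k:
        return 1.0  # too few failures for k draws to all miss
    return 1.0 - comb(n - c, k) / comb(n, k)

# e.g. 16 samples per problem, 14 correct -> pass@1 = 14/16
print(pass_at_k(16, 14, 1))  # 0.875
```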
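
Because the two models' prices diverge most on output tokens, and reasoning models like R1 emit long chains of thought, per-request cost depends heavily on completion length. A small worked example using the prices in the table above (the 2,000 input / 10,000 output token counts are hypothetical):

```python
PRICES = {  # USD per million tokens, from the table above
    "deepseek-r1": {"input": 0.55, "output": 2.19},
    "llama-3.1-nemotron-70b-instruct": {"input": 0.35, "output": 0.40},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for one request at the listed per-million-token rates."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# 2,000 prompt tokens plus 10,000 completion tokens:
print(f"{request_cost('deepseek-r1', 2_000, 10_000):.4f}")                      # 0.0230
print(f"{request_cost('llama-3.1-nemotron-70b-instruct', 2_000, 10_000):.4f}")  # 0.0047
```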
