Llama 3.1 Nemotron 70B Instruct

NVIDIA's Llama 3.1 Nemotron 70B is a powerful language model optimized for delivering accurate and informative responses. Built on the Llama 3.1 70B architecture and enhanced with Reinforcement Learning from Human Feedback (RLHF), it achieves top performance on automatic alignment benchmarks. Designed for applications demanding high precision and helpfulness in response generation, this model is well-suited to a wide range of user queries across multiple domains.

Mistral Large 2

Mistral Large 2, developed by Mistral AI, offers a 128K-token context window and is priced at $3.00 per million input tokens and $9.00 per million output tokens. Released on July 24, 2024, the model scored 84.0 on the MMLU benchmark in a 5-shot evaluation, demonstrating strong performance across diverse tasks.
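As a quick illustration of how the per-token rates above translate into the cost of a single request, here is a minimal Python sketch. The token counts used in the example are hypothetical placeholders, not figures from this page.

```python
# Estimate the cost of one Mistral Large 2 request from the listed rates.
# The token counts below are hypothetical; substitute your own usage numbers.

INPUT_PRICE_PER_M = 3.00   # USD per million input tokens
OUTPUT_PRICE_PER_M = 9.00  # USD per million output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in USD for a single request."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Example: a 2,000-token prompt with a 500-token completion.
print(f"${request_cost(2_000, 500):.4f}")  # $0.0105
```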

| | Llama 3.1 Nemotron 70B Instruct | Mistral Large 2 |
| --- | --- | --- |
| Web Site | - | - |
| Provider | NVIDIA | Mistral AI |
| Chat | - | - |
| Release Date | - | July 24, 2024 |
| Modalities | text | text |
| API Providers | OpenRouter | Azure AI, AWS Bedrock, Google AI Studio, Vertex AI, Snowflake Cortex |
| Knowledge Cut-off Date | - | Unknown |
| Open Source | Yes | Yes |
| Pricing (Input) | $0.35 per million tokens | $3.00 per million tokens |
| Pricing (Output) | $0.40 per million tokens | $9.00 per million tokens |
| MMLU | 85% (5-shot) | 84% (5-shot) |
| MMLU-Pro | Not available | 50.69% |
| MMMU | Not available | Not available |
| HellaSwag | Not available | Not available |
| HumanEval | 75% | Not available |
| MATH | 71% | 1.13% |
| GPQA | Not available | 24.94% |
| IFEval | Not available | 84.01% |
| SimpleQA | - | - |
| AIME 2024 | - | - |
| AIME 2025 | - | - |
| Aider Polyglot | - | - |
| LiveCodeBench v5 | - | - |
| Global MMLU (Lite) | - | - |
| MathVista | - | - |
| Mobile Application | - | - |
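The table lists OpenRouter as an API provider for Llama 3.1 Nemotron 70B Instruct. Below is a minimal sketch of querying the model through OpenRouter's OpenAI-compatible endpoint; the model identifier and the environment variable name are assumptions made for this example, so verify them against OpenRouter's model catalog before use.

```python
# Minimal sketch: call Llama 3.1 Nemotron 70B Instruct via OpenRouter's
# OpenAI-compatible API. Requires `pip install openai`.
# The model ID and OPENROUTER_API_KEY variable name are assumptions;
# check OpenRouter's catalog and your account settings.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)

response = client.chat.completions.create(
    model="nvidia/llama-3.1-nemotron-70b-instruct",  # assumed model ID
    messages=[
        {"role": "user", "content": "Summarize RLHF in two sentences."},
    ],
)
print(response.choices[0].message.content)
```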
