Llama 3.1 Nemotron 70B Instruct

NVIDIA's Llama 3.1 Nemotron 70B is a powerful language model optimized for delivering accurate and informative responses. Built on the Llama 3.1 70B architecture and enhanced with Reinforcement Learning from Human Feedback (RLHF), it achieves top performance on automatic alignment benchmarks. Designed for applications that demand high precision and helpfulness in response generation, it is well suited to a wide range of user queries across multiple domains.
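
The comparison table below lists OpenRouter as an API provider for this model. As a rough illustration (not part of the original comparison), the sketch below queries it through OpenRouter's OpenAI-compatible chat completions endpoint; the model slug, the OPENROUTER_API_KEY environment variable, and the request parameters are assumptions to verify against OpenRouter's catalog.

```python
# Minimal sketch: querying Nemotron 70B via OpenRouter's OpenAI-compatible API.
# Assumptions: the `openai` package is installed, an API key is stored in the
# OPENROUTER_API_KEY environment variable, and the model slug below is current.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",   # OpenRouter's OpenAI-compatible endpoint
    api_key=os.environ["OPENROUTER_API_KEY"],
)

response = client.chat.completions.create(
    model="nvidia/llama-3.1-nemotron-70b-instruct",  # assumed slug; check OpenRouter's model list
    messages=[{"role": "user", "content": "Summarize RLHF in two sentences."}],
    max_tokens=200,
)
print(response.choices[0].message.content)
```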

Nova Micro

Amazon Nova Micro is a text-only model optimized for cost and speed. With a context window of 128K tokens, it excels at tasks such as text summarization, translation, interactive chat, and basic coding. Released as part of the Amazon Nova family of foundation models, it supports fine-tuning and distillation for customization on proprietary data.
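
Nova Micro is served through Amazon Bedrock (see the table below). As a hedged sketch rather than an official recipe, the snippet below calls it with the Bedrock Converse API via boto3; the model ID, region, and inference settings are assumptions to check against your own Bedrock setup.

```python
# Minimal sketch: invoking Nova Micro through the Amazon Bedrock Converse API.
# Assumptions: boto3 is installed, AWS credentials are configured, the model is
# enabled in your account, and the model ID / region below match your setup.
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")  # assumed region

response = client.converse(
    modelId="amazon.nova-micro-v1:0",  # assumed model ID; verify in the Bedrock console
    messages=[
        {"role": "user", "content": [{"text": "Translate 'good morning' into French."}]}
    ],
    inferenceConfig={"maxTokens": 200, "temperature": 0.2},
)
print(response["output"]["message"]["content"][0]["text"])
```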

| | Llama 3.1 Nemotron 70B Instruct | Nova Micro |
|---|---|---|
| Web Site | - | - |
| Provider | NVIDIA | Amazon |
| Chat | - | - |
| Release Date | - | - |
| Modalities | text | text |
| API Providers | OpenRouter | Amazon Bedrock |
| Knowledge Cut-off Date | - | Purposefully not disclosed |
| Open Source | Yes | No |
| Pricing (Input) | $0.35 per million tokens | $0.04 per million tokens |
| Pricing (Output) | $0.40 per million tokens | $0.14 per million tokens |
| MMLU | 85% (5-shot) | 77.6% (CoT) |
| MMLU-Pro | Not available | - |
| MMMU | Not available | - |
| HellaSwag | Not available | - |
| HumanEval | 75% | 81.1% (pass@1) |
| MATH | 71% | 69.3% (CoT) |
| GPQA | Not available | 40% (Main) |
| IFEval | Not available | 87.2% |
| SimpleQA | - | - |
| AIME 2024 | - | - |
| AIME 2025 | - | - |
| Aider Polyglot | - | - |
| LiveCodeBench v5 | - | - |
| Global MMLU (Lite) | - | - |
| MathVista | - | - |
| Mobile Application | - | - |
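
To make the pricing rows concrete, the short sketch below estimates the cost of a single request at the listed per-million-token rates; the request size used here (2,000 input tokens, 500 output tokens) is an arbitrary assumption, not a figure from this comparison.

```python
# Back-of-envelope cost comparison using the per-million-token prices from the table above.
# The example request size (2,000 input tokens, 500 output tokens) is an arbitrary assumption.
PRICES = {  # USD per million tokens: (input, output)
    "Llama 3.1 Nemotron 70B Instruct": (0.35, 0.40),
    "Nova Micro": (0.04, 0.14),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for a single request at the listed rates."""
    in_price, out_price = PRICES[model]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

for model in PRICES:
    print(f"{model}: ${request_cost(model, 2_000, 500):.6f} per request")
# Nemotron 70B: (2000 * 0.35 + 500 * 0.40) / 1e6 = $0.000900 per request
# Nova Micro:   (2000 * 0.04 + 500 * 0.14) / 1e6 = $0.000150 per request
```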
