Llama 4 Scout

Llama 4 Scout is a mixture-of-experts model with 17 billion active parameters across 16 experts, positioning it as a leading multimodal model in its size class. Meta reports that it outperforms comparable models such as Gemma 3, Gemini 2.0 Flash-Lite, and Mistral 3.1 across a range of benchmarks. Despite this performance, Llama 4 Scout is remarkably efficient: with Int4 quantization it can run on a single NVIDIA H100 GPU. It also offers an industry-leading 10-million-token context window and is natively multimodal, accepting text, image, and video inputs for advanced real-world applications.
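The single-H100 claim can be sanity-checked with back-of-the-envelope arithmetic. This sketch assumes the published figure of roughly 109 billion total parameters (17B active, 16 experts) and counts weight memory only, ignoring KV-cache and activation overhead:

```python
def quantized_weight_gib(n_params: float, bits_per_param: int) -> float:
    """Approximate weight-only memory footprint in GiB."""
    return n_params * bits_per_param / 8 / 2**30

# Assumption: ~109B total parameters (17B active across 16 experts).
total_params = 109e9

fp16 = quantized_weight_gib(total_params, 16)  # far beyond one 80 GB H100
int4 = quantized_weight_gib(total_params, 4)   # fits within 80 GB

print(f"FP16 weights: {fp16:.1f} GiB, Int4 weights: {int4:.1f} GiB")
```

At 4 bits per weight the model needs roughly 51 GiB for weights, which is why Int4 quantization is the stated path to single-GPU deployment.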

Llama 3.1 Nemotron 70B Instruct

NVIDIA's Llama 3.1 Nemotron 70B is a powerful language model optimized for delivering accurate and informative responses. Built on the Llama 3.1 70B architecture and enhanced with Reinforcement Learning from Human Feedback (RLHF), it achieves top performance on automatic alignment benchmarks. Designed for applications demanding high precision and helpfulness in response generation, this model is well suited to a wide range of user queries across multiple domains.

| | Llama 4 Scout | Llama 3.1 Nemotron 70B Instruct |
|---|---|---|
| Provider | Meta | NVIDIA |
| Release Date | Apr 05, 2025 | Oct 15, 2024 |
| Modalities | text, images, video | text |
| API Providers | Meta AI, Hugging Face, Fireworks, Together, DeepInfra | OpenRouter |
| Knowledge Cut-off Date | 2025-04 | - |
| Open Source | Yes | Yes |
| Pricing (Input) | Not available | $0.35 per million tokens |
| Pricing (Output) | Not available | $0.40 per million tokens |
| MMLU | Not available | 85% (5-shot) |
| MMLU Pro | 74.3% (Reasoning & Knowledge) | Not available |
| MMMU | 69.4% (Image Reasoning) | Not available |
| HellaSwag | Not available | Not available |
| HumanEval | Not available | 75% |
| MATH | Not available | 71% |
| GPQA | 57.2% (Diamond) | Not available |
| IFEval | Not available | Not available |
| Mobile Application | - | - |
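The per-million-token pricing listed above translates directly into request cost. A minimal helper (the function name and the example token counts are illustrative; the default rates are the Nemotron 70B prices shown in the table):

```python
def request_cost_usd(input_tokens: int, output_tokens: int,
                     input_rate: float = 0.35, output_rate: float = 0.40) -> float:
    """Cost in USD, given rates quoted per million tokens."""
    return (input_tokens * input_rate + output_tokens * output_rate) / 1_000_000

# e.g. a 4,000-token prompt with a 1,000-token completion:
print(f"${request_cost_usd(4_000, 1_000):.4f}")
```

At these rates, a 4,000-token prompt with a 1,000-token completion costs $0.0018, illustrating why output tokens dominate cost only for generation-heavy workloads.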
