Llama 4 Scout

Llama 4 Scout is a Mixture-of-Experts (MoE) model with 17 billion active parameters per token (109B total across 16 experts), positioning it as a leading multimodal model in its class. It outperforms comparable models such as Gemma 3, Gemini 2.0 Flash-Lite, and Mistral Small 3.1 across a wide range of reported benchmarks. Despite this performance, Llama 4 Scout is remarkably efficient: with Int4 quantization it can run on a single NVIDIA H100 GPU. It also offers an industry-leading 10-million-token context window and is natively multimodal, processing text, image, and video inputs for advanced real-world applications.
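
As a minimal sketch of what that single-GPU, Int4 deployment might look like, the snippet below loads Scout through Hugging Face transformers with 4-bit weight quantization via bitsandbytes. The repo id meta-llama/Llama-4-Scout-17B-16E-Instruct and the assumption that the quantized weights fit in 80 GB at moderate context lengths are mine, not claims from this page.

```python
# Hedged sketch, not a verified recipe: Llama 4 Scout with 4-bit weights
# via transformers + bitsandbytes. Assumes access to the gated meta-llama
# repo and a recent transformers install with Llama 4 support.
import torch
from transformers import pipeline, BitsAndBytesConfig

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # Int4 weights, per the efficiency claim
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16 for quality
)

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-4-Scout-17B-16E-Instruct",  # assumed HF repo id
    device_map="auto",                                   # place weights on the GPU
    model_kwargs={"quantization_config": quant_config},
)

messages = [{"role": "user", "content": "Explain mixture-of-experts routing in two sentences."}]
print(generator(messages, max_new_tokens=120)[0]["generated_text"])
```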

DeepSeek-R1

DeepSeek-R1 is a 671B-parameter Mixture-of-Experts (MoE) model with 37B parameters activated per token, trained via large-scale reinforcement learning with a focus on reasoning. Its training pipeline incorporates two RL stages, for discovering improved reasoning patterns and aligning with human preferences, and two SFT stages that seed its reasoning and non-reasoning capabilities. The model achieves performance comparable to OpenAI's o1 across math, code, and reasoning tasks.
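
DeepSeek serves R1 behind an OpenAI-compatible API, so a reasoning call can be sketched with the standard openai client. The base URL and the model name "deepseek-reasoner" follow DeepSeek's published API conventions; the separate reasoning_content field on the reply is DeepSeek-specific, so treat it as an assumption if your client or API version differs.

```python
# Minimal sketch: querying DeepSeek-R1 through DeepSeek's OpenAI-compatible API.
# Assumes a DEEPSEEK_API_KEY in the environment.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],
    base_url="https://api.deepseek.com",  # DeepSeek's OpenAI-compatible endpoint
)

response = client.chat.completions.create(
    model="deepseek-reasoner",  # the R1 reasoning model
    messages=[{"role": "user", "content": "Prove that sqrt(2) is irrational."}],
)

message = response.choices[0].message
# R1 returns its chain of thought separately from the final answer.
print("reasoning:", getattr(message, "reasoning_content", None))
print("answer:", message.content)
```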

|                        | Llama 4 Scout                                          | DeepSeek-R1              |
|------------------------|--------------------------------------------------------|--------------------------|
| Provider               | Meta                                                   | DeepSeek                 |
| Release Date           | Apr 05, 2025                                           | Jan 21, 2025             |
| Modalities             | text, images, video                                    | text                     |
| API Providers          | Meta AI, Hugging Face, Fireworks, Together, DeepInfra  | DeepSeek, Hugging Face   |
| Knowledge Cut-off Date | 2025-04                                                | Unknown                  |
| Open Source            | Yes                                                    | Yes                      |
| Pricing (Input)        | Not available                                          | $0.55 per million tokens |
| Pricing (Output)       | Not available                                          | $2.19 per million tokens |
| MMLU                   | Not available                                          | 90.8% (Pass@1)           |
| MMLU Pro               | 74.3% (Reasoning & Knowledge)                          | 84% (EM)                 |
| MMMU                   | 69.4% (Image Reasoning)                                | -                        |
| HellaSwag              | Not available                                          | -                        |
| HumanEval              | Not available                                          | -                        |
| MATH                   | Not available                                          | -                        |
| GPQA                   | 57.2% (Diamond)                                        | 71.5% (Pass@1)           |
| IFEval                 | Not available                                          | 83.3% (Prompt Strict)    |
| Mobile Application     | -                                                      |                          |
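
To make the pricing rows concrete, here is a small sketch that estimates per-request cost at DeepSeek-R1's listed rates ($0.55 per million input tokens, $2.19 per million output tokens). The token counts are purely illustrative.

```python
# Cost estimate at DeepSeek-R1's listed API rates (per the table above).
INPUT_RATE = 0.55 / 1_000_000   # USD per input token
OUTPUT_RATE = 2.19 / 1_000_000  # USD per output token

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of one request."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Example: a 4,000-token prompt with a 1,500-token reasoning-heavy answer.
print(f"${request_cost(4_000, 1_500):.4f}")  # -> $0.0055
```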
