Llama 4 Scout

Llama 4 Scout is a model with 17 billion active parameters built on a Mixture-of-Experts architecture with 16 experts, positioning it as the leading multimodal model in its class. It outperforms competitors such as Gemma 3, Gemini 2.0 Flash-Lite, and Mistral 3.1 across a broad range of benchmarks. Despite this performance, Llama 4 Scout is remarkably efficient: with Int4 quantization it can run on a single NVIDIA H100 GPU. It also offers an industry-leading 10-million-token context window and is natively multimodal, processing text, image, and video inputs for advanced real-world applications.
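
The single-GPU claim can be exercised with an off-the-shelf stack. Below is a minimal sketch, assuming the gated Hugging Face checkpoint `meta-llama/Llama-4-Scout-17B-16E-Instruct`, a Transformers release recent enough to include the `Llama4ForConditionalGeneration` class, and bitsandbytes 4-bit loading as a stand-in for Meta's Int4 release; it is not an official recipe, and the actual memory headroom on one H100 depends on context length.

```python
# Sketch: loading Llama 4 Scout with 4-bit weights via Hugging Face Transformers
# and bitsandbytes. Checkpoint id and quantization settings are assumptions.
import torch
from transformers import AutoProcessor, BitsAndBytesConfig, Llama4ForConditionalGeneration

model_id = "meta-llama/Llama-4-Scout-17B-16E-Instruct"  # gated repo, requires access approval

quant = BitsAndBytesConfig(
    load_in_4bit=True,                      # 4-bit weights, in the spirit of the Int4 claim
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16 for stability
)

processor = AutoProcessor.from_pretrained(model_id)
model = Llama4ForConditionalGeneration.from_pretrained(
    model_id,
    quantization_config=quant,
    device_map="auto",                      # spread layers over the available GPU(s)
)

messages = [
    {"role": "user", "content": [
        {"type": "text", "text": "Describe mixture-of-experts routing in one paragraph."}
    ]}
]
inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="pt"
).to(model.device)

out = model.generate(**inputs, max_new_tokens=200)
print(processor.batch_decode(out[:, inputs["input_ids"].shape[-1]:], skip_special_tokens=True)[0])
```

The same chat template accepts image (and video) content blocks alongside text, which is how the model's native multimodality is exposed through the processor.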

Claude 3.5 Haiku

Claude 3.5 Haiku, developed by Anthropic, offers a 200,000-token context window. Pricing is $0.80 per million input tokens and $4.00 per million output tokens, with savings of up to 90% available through prompt caching and 50% through the Message Batches API. Released on November 4, 2024, the model is well suited to code completion, interactive chatbots, data extraction and labeling, and real-time content moderation.
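
For a moderation-style workload, the prompt-caching discount comes from marking a large, reusable prefix for caching. The sketch below uses the Anthropic Python SDK and the dated model alias `claude-3-5-haiku-20241022`; `moderation_policy.txt` and the question are placeholder inputs, and depending on SDK version prompt caching may still require a beta header.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Placeholder: a large, reusable context (e.g. moderation guidelines) that
# benefits from prompt caching because it is resent on every request.
policy_document = open("moderation_policy.txt").read()

response = client.messages.create(
    model="claude-3-5-haiku-20241022",
    max_tokens=512,
    system=[
        {
            "type": "text",
            "text": policy_document,
            "cache_control": {"type": "ephemeral"},  # cache this prefix across calls
        }
    ],
    messages=[
        {"role": "user", "content": "Does this policy allow user-uploaded video thumbnails?"}
    ],
)
print(response.content[0].text)
```

The same request shape can be submitted asynchronously through the Message Batches API when latency is not critical, which is where the additional 50% discount applies.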

| | Llama 4 Scout | Claude 3.5 Haiku |
| --- | --- | --- |
| Provider | Meta | Anthropic |
| Release Date | Apr 05, 2025 | Nov 04, 2024 |
| Modalities | text, images, video | text |
| API Providers | Meta AI, Hugging Face, Fireworks, Together, DeepInfra | Anthropic, AWS Bedrock, Vertex AI |
| Knowledge Cut-off Date | 2025-04 | 01.04.2024 |
| Open Source | Yes | No |
| Pricing (Input) | Not available | $0.80 per million tokens |
| Pricing (Output) | Not available | $4.00 per million tokens |
| MMLU | Not available | Not available |
| MMLU Pro | 74.3% (Reasoning & Knowledge) | 65% (0-shot CoT) |
| MMMU | 69.4% (Image Reasoning) | Not available |
| HellaSwag | Not available | Not available |
| HumanEval | Not available | 88.1% (0-shot) |
| MATH | Not available | 69.4% (0-shot CoT) |
| GPQA (Diamond) | 57.2% | Not available |
| IFEval | Not available | Not available |
| Mobile Application | - | - |
