Llama 4 Scout is a Mixture-of-Experts model with 17 billion active parameters spread across 16 experts, positioning it as the top multimodal model in its category. It consistently outperforms competitors such as Gemma 3, Gemini 2.0 Flash-Lite, and Mistral 3.1 across a broad range of benchmarks. Despite this performance, Llama 4 Scout is remarkably efficient: with Int4 quantization it can run on a single NVIDIA H100 GPU. It also offers an industry-leading 10-million-token context window and is natively multimodal, processing text, image, and video inputs for advanced real-world applications.
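Because Scout fits on a single H100 with Int4 quantization, a local deployment is fairly direct. Below is a minimal sketch using Hugging Face `transformers` with `bitsandbytes` 4-bit loading; the checkpoint id, model class, and generation settings are assumptions rather than a vendor-documented recipe, so check the model card before using them.

```python
# Minimal sketch: load a Llama 4 Scout checkpoint in 4-bit and generate text.
# The model id below is an assumption; verify it on the Hugging Face model card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

MODEL_ID = "meta-llama/Llama-4-Scout-17B-16E-Instruct"  # assumed checkpoint name

# 4-bit (Int4/NF4-style) quantization so the weights fit on a single H100
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    quantization_config=quant_config,
    device_map="auto",  # place layers on the available GPU(s)
)

prompt = "Summarize the difference between dense and Mixture-of-Experts models."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```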
| | Qwen 3 | Llama 4 Scout |
|---|---|---|
| Release Date | Apr 29, 2025 | Apr 05, 2025 |
| Modalities | - | text, images, video |
| API Providers | - | Meta AI, Hugging Face, Fireworks, Together, DeepInfra |
| Knowledge Cut-off Date | - | 2025-04 |
| Open Source | Yes | Yes |
| Pricing (Input) | - | Not available |
| Pricing (Output) | - | Not available |
| MMLU | - | Not available |
| MMLU Pro (Reasoning & Knowledge) | - | 74.3% |
| MMMU (Image Reasoning) | - | 69.4% |
| HellaSwag | - | Not available |
| HumanEval | - | Not available |
| MATH | - | Not available |
| GPQA Diamond | - | 57.2% |
| IFEval | - | Not available |
| AIME 2024 | - | - |
| AIME 2025 | - | - |
| Mobile Application | - | - |
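For hosted access, several of the API providers listed in the table above expose OpenAI-compatible chat endpoints. The sketch below assumes Together AI's base URL, environment variable name, and model identifier; all three are assumptions, so confirm them against the provider's documentation.

```python
# Minimal sketch: query Llama 4 Scout through an OpenAI-compatible provider API.
# Base URL, env var name, and model identifier are assumptions.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.together.xyz/v1",   # assumed provider endpoint
    api_key=os.environ["TOGETHER_API_KEY"],   # assumed environment variable
)

response = client.chat.completions.create(
    model="meta-llama/Llama-4-Scout-17B-16E-Instruct",  # assumed model identifier
    messages=[
        {"role": "user", "content": "Give a one-paragraph overview of Mixture-of-Experts models."},
    ],
    max_tokens=200,
)
print(response.choices[0].message.content)
```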