Llama 4 Scout is a mixture-of-experts model with 17 billion active parameters and 16 experts, which Meta positions as the leading multimodal model in its class. It consistently outperforms competitors such as Gemma 3, Gemini 2.0 Flash-Lite, and Mistral 3.1 across diverse benchmark tasks. Despite its performance, Llama 4 Scout is remarkably efficient: with Int4 quantization it can run on a single NVIDIA H100 GPU. It also offers an industry-leading 10 million token context window and is natively multimodal, processing text, image, and video inputs for demanding real-world applications.
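For readers who want to try the single-GPU setup described above, the sketch below loads the model through Hugging Face transformers with 4-bit weights via bitsandbytes. It is a minimal illustration, not Meta's official Int4 recipe: the repository id, the need for a recent transformers release with Llama 4 support, and the assumption that the bitsandbytes 4-bit path handles the MoE layers should all be checked against the model card.

```python
# Minimal sketch: Llama 4 Scout with 4-bit (Int4-style) weights on one GPU.
# The repo id below is an assumption based on Meta's published naming and
# requires gated access on Hugging Face.
import torch
from transformers import AutoProcessor, Llama4ForConditionalGeneration, BitsAndBytesConfig

model_id = "meta-llama/Llama-4-Scout-17B-16E-Instruct"  # assumed gated repo id

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # 4-bit weight quantization
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16 on the H100
)

processor = AutoProcessor.from_pretrained(model_id)
model = Llama4ForConditionalGeneration.from_pretrained(
    model_id,
    quantization_config=quant_config,  # shrink the MoE weights to fit one H100
    device_map="auto",
)

messages = [
    {"role": "user",
     "content": [{"type": "text", "text": "Give a one-sentence summary of mixture-of-experts models."}]}
]
inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt",
).to(model.device)

out = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(out[:, inputs["input_ids"].shape[-1]:], skip_special_tokens=True)[0])
```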
GPT-4.1, launched by OpenAI on April 14, 2025, introduces a 1 million token context window and supports outputs of up to 32,768 tokens per request. It delivers strong coding performance, scoring 54.6% on the SWE-Bench Verified benchmark, and improves on GPT-4o by 10.5 percentage points on the MultiChallenge instruction-following benchmark. The model's knowledge cutoff is June 2024. Pricing is $2.00 per million input tokens and $8.00 per million output tokens, with a 75% discount on cached inputs, making repeated queries highly cost-efficient.
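As a concrete illustration of the API side, the sketch below calls GPT-4.1 through the OpenAI Python SDK and estimates the request cost from the per-token rates quoted above. The prompt and max_tokens value are placeholders, and the cost arithmetic ignores the cached-input discount.

```python
# Minimal sketch: calling GPT-4.1 via the OpenAI Python SDK. The model id
# "gpt-4.1" follows OpenAI's launch naming; max_tokens is set well below
# the 32,768-token output cap mentioned above.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4.1",
    messages=[
        {"role": "system", "content": "You are a concise coding assistant."},
        {"role": "user", "content": "Write a Python function that reverses a linked list."},
    ],
    max_tokens=1024,
)

print(response.choices[0].message.content)

# Rough cost estimate at the published rates ($2.00 input / $8.00 output per million tokens),
# without the cached-input discount.
usage = response.usage
cost = usage.prompt_tokens * 2.00 / 1e6 + usage.completion_tokens * 8.00 / 1e6
print(f"Approximate request cost: ${cost:.6f}")
```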
| | Llama 4 Scout | GPT-4.1 |
|---|---|---|
| Provider | Meta | OpenAI |
| Release Date | 2025-04-05 | 2025-04-14 |
| Modalities | text, images, video | text, images |
| API Providers | Meta AI, Hugging Face, Fireworks, Together, DeepInfra | OpenAI API |
| Knowledge Cut-off Date | 2024-08 | 2024-06 |
| Open Source | Yes | No |
| Pricing (Input) | - | $2.00 per million tokens |
| Pricing (Output) | - | $8.00 per million tokens |
| MMLU | - | 90.2% pass@1 |
| MMLU-Pro | 74.3% (Reasoning & Knowledge) | - |
| MMMU | 69.4% (Image Reasoning) | 74.8% |
| HellaSwag | - | - |
| HumanEval | - | - |
| MATH | - | - |
| GPQA Diamond | 57.2% | 66.3% |
| IFEval | - | - |
| SimpleQA | - | - |
| AIME 2024 | - | 48.1% |
| AIME 2025 | - | - |
| Aider Polyglot | - | - |
| LiveCodeBench v5 | - | - |
| Global MMLU (Lite) | - | 87.3% pass@1 |
| MathVista | - | - |