Llama 4 Maverick is a cutting-edge multimodal model featuring 17 billion active parameters within a Mixture-of-Experts architecture of 128 experts, totaling 400 billion parameters. It leads its class, outperforming models like GPT-4o and Gemini 2.0 Flash across a wide range of benchmarks, and it matches DeepSeek V3 on reasoning and coding tasks while using fewer than half the active parameters. Designed for efficiency and scalability, Maverick delivers a best-in-class performance-to-cost ratio, with an experimental chat variant achieving an Elo score of 1417 on LMArena. Despite its scale, it runs on a single NVIDIA H100 host, keeping deployment simple and practical.
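The active-versus-total parameter split follows directly from how a Mixture-of-Experts layer routes tokens: every expert's weights exist in memory, but each token only runs through the expert the router selects. Here is a minimal NumPy sketch of top-1 routing; the layer sizes, expert count, and routing rule are toy assumptions for illustration, not Maverick's actual configuration (which, for instance, also includes a shared expert):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy MoE layer: all experts' weights exist (total parameters), but the
# router activates only one expert per token (active parameters).
# Sizes here are made up and far smaller than Maverick's.
d_model, d_ff, n_experts = 64, 256, 8

router_w = rng.normal(size=(d_model, n_experts))          # routing weights
experts_w1 = rng.normal(size=(n_experts, d_model, d_ff))  # expert up-projections
experts_w2 = rng.normal(size=(n_experts, d_ff, d_model))  # expert down-projections

def moe_forward(x):
    """Route each token to its top-1 expert and run only that expert."""
    logits = x @ router_w                # (tokens, n_experts)
    choice = logits.argmax(axis=-1)      # top-1 expert index per token
    out = np.empty_like(x)
    for e in range(n_experts):
        mask = choice == e
        if mask.any():                   # only the chosen expert does work
            h = np.maximum(x[mask] @ experts_w1[e], 0.0)  # ReLU
            out[mask] = h @ experts_w2[e]
    return out

tokens = rng.normal(size=(16, d_model))
y = moe_forward(tokens)

total = experts_w1.size + experts_w2.size         # all expert weights
active = experts_w1[0].size + experts_w2[0].size  # weights touched per token
print(y.shape, f"total expert params: {total:,}", f"active per token: {active:,}")
```

The same logic explains Maverick's ratio: 400B parameters are stored, but only 17B participate in any given forward pass, which is why the compute cost tracks the active count rather than the total.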
GPT-4.1, launched by OpenAI on April 14, 2025, introduces a 1 million token context window and supports outputs of up to 32,768 tokens per request. It delivers outstanding performance on coding tasks, scoring 54.6% on SWE-bench Verified, and posts a 10.5-percentage-point improvement over GPT-4o on the MultiChallenge instruction-following benchmark. The model's knowledge cutoff is June 2024. Pricing is $2.00 per million input tokens and $8.00 per million output tokens, with a 75% discount applied to cached inputs, making it highly cost-efficient for repeated queries.
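To see what those rates mean in practice, here is a small cost helper using only the prices quoted above. The function name and the assumption that the 75% discount applies per cached input token are mine; treat this as a back-of-the-envelope sketch, not OpenAI's official billing formula:

```python
# GPT-4.1 cost estimate from the published rates above.
INPUT_PER_M = 2.00      # USD per million input tokens
OUTPUT_PER_M = 8.00     # USD per million output tokens
CACHED_DISCOUNT = 0.75  # cached input tokens cost 75% less

def gpt41_cost(input_tokens: int, output_tokens: int, cached_tokens: int = 0) -> float:
    """Estimate request cost in USD; cached_tokens must not exceed input_tokens."""
    fresh = input_tokens - cached_tokens
    cost = fresh * INPUT_PER_M / 1e6
    cost += cached_tokens * INPUT_PER_M * (1 - CACHED_DISCOUNT) / 1e6
    cost += output_tokens * OUTPUT_PER_M / 1e6
    return cost

# A 100k-token prompt with 80k of it cached, answered in 2k tokens:
print(f"${gpt41_cost(100_000, 2_000, cached_tokens=80_000):.4f}")  # $0.0960
```

The example shows why the cache discount matters for repeated queries: reusing 80% of a long prompt cuts its input cost by more than half.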
| | Llama 4 Maverick | GPT-4.1 |
| --- | --- | --- |
| Provider | Meta | OpenAI |
| Release Date | 2025-04-05 | 2025-04-14 |
| Modalities | text, images, video | text, images |
| API Providers | Meta AI, Hugging Face, Fireworks, Together, DeepInfra | OpenAI API |
| Knowledge Cut-off Date | 2024-08 | 2024-06 |
| Open Source | Yes | No |
| Pricing (Input) | Not available | $2.00 per million tokens |
| Pricing (Output) | Not available | $8.00 per million tokens |
| MMLU | - | 90.2% (pass@1) |
| MMLU-Pro | 80.5% | - |
| MMMU | 73.4% | 74.8% |
| HellaSwag | - | - |
| HumanEval | - | - |
| MATH | - | - |
| GPQA Diamond | 69.8% | 66.3% |
| IFEval | - | - |
| SimpleQA | - | - |
| AIME 2024 | - | 48.1% |
| AIME 2025 | - | - |
| Aider Polyglot | - | - |
| LiveCodeBench v5 | - | - |
| Global MMLU (Lite) | - | 87.3% (pass@1) |
| MathVista | - | - |
| Mobile Application | - | - |
| VideoGameBench | Llama 4 Maverick | GPT-4.1 |
| --- | --- | --- |
| Total score | 0% | - |
| Doom II | 0% | - |
| Kirby's Dream Land DX | 0% | - |
| Zelda: Link's Awakening DX | 0% | - |
| Civilization I | 0% | - |
| Pokemon Crystal | 0% | - |
| The Need for Speed | 0% | - |
| The Incredible Machine | 0% | - |
| Secret Game 1 | 0% | - |
| Secret Game 2 | 0% | - |
| Secret Game 3 | 0% | - |
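The API Providers row above indicates how each model can be reached programmatically. Below is a sketch using the official `openai` Python SDK: GPT-4.1 is called directly, while Llama 4 Maverick is shown through Together's OpenAI-compatible endpoint. The Together `base_url` and model identifier are assumptions based on that provider's typical naming, so verify them against the provider's documentation before use:

```python
from openai import OpenAI

PROMPT = [{"role": "user", "content": "Summarize MoE routing in one sentence."}]

# GPT-4.1 via the OpenAI API (reads OPENAI_API_KEY from the environment).
openai_client = OpenAI()
resp = openai_client.chat.completions.create(model="gpt-4.1", messages=PROMPT)
print(resp.choices[0].message.content)

# Llama 4 Maverick via an OpenAI-compatible provider (Together shown here;
# the base_url and model id below are assumptions, not confirmed values).
together_client = OpenAI(
    base_url="https://api.together.xyz/v1",
    api_key="YOUR_TOGETHER_KEY",
)
resp = together_client.chat.completions.create(
    model="meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8",
    messages=PROMPT,
)
print(resp.choices[0].message.content)
```

Fireworks and DeepInfra expose similar OpenAI-compatible endpoints, so the same pattern should carry over by swapping the `base_url` and model id.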