Llama 4 Maverick is a cutting-edge multimodal model with 17 billion active parameters in a 128-expert Mixture-of-Experts architecture, for a total of 400 billion parameters. It leads its class, outperforming models such as GPT-4o and Gemini 2.0 Flash across a wide range of benchmarks, and it matches DeepSeek V3 on reasoning and coding tasks while using fewer than half the active parameters. Designed for efficiency and scalability, Maverick delivers a best-in-class performance-to-cost ratio, with an experimental chat variant achieving an Elo score of 1417 on LMArena. Despite its scale, it runs on a single NVIDIA H100 host, keeping deployment simple and practical.
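To make the active-versus-total parameter arithmetic concrete, here is a minimal Python sketch of top-k expert routing in a Mixture-of-Experts layer. Only the expert count and the headline parameter figures come from the numbers above; the single-expert routing, the random router scores, and the `route_token` helper are illustrative assumptions, not Meta's implementation.

```python
# Toy illustration of Mixture-of-Experts routing: per token, a router
# selects a small subset of experts, so only a fraction of the total
# parameters are "active" on any forward pass.

import random

NUM_EXPERTS = 128   # experts per MoE layer, as reported for Maverick
TOP_K = 1           # experts routed per token (illustrative assumption)

def route_token(router_scores: list[float], k: int = TOP_K) -> list[int]:
    """Return the indices of the k highest-scoring experts for one token."""
    return sorted(range(len(router_scores)), key=lambda i: -router_scores[i])[:k]

scores = [random.random() for _ in range(NUM_EXPERTS)]
chosen = route_token(scores)
print(f"Token routed to expert(s) {chosen}; "
      f"{NUM_EXPERTS - TOP_K} of {NUM_EXPERTS} experts stay idle.")

# The headline figures follow the same logic at scale:
total_params = 400e9   # all experts counted
active_params = 17e9   # parameters actually exercised per token
print(f"Active fraction per token: {active_params / total_params:.1%}")
```

This is why the model can match much denser competitors at lower serving cost: compute per token scales with the active parameters, while the full 400 billion only need to fit in memory.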
GPT-4.1 Nano, launched by OpenAI on April 14, 2025, is the company's fastest and most affordable model to date. Designed for low-latency tasks such as classification, autocomplete, and fast inference scenarios, it combines a compact architecture with robust capabilities. Despite its size, it supports an impressive 1 million token context window and delivers strong benchmark results, achieving 80.1% on MMLU and 50.3% on GPQA. With a knowledge cutoff of June 2024, GPT-4.1 Nano offers exceptional value at just $0.10 per million input tokens and $0.40 per million output tokens, with a 75% discount applied to cached inputs, making it ideal for high-volume, cost-sensitive deployments.
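Those rates translate into per-request costs as follows. This is a back-of-the-envelope sketch using only the prices quoted above; the `request_cost` helper and the example request mix (token counts, cache hit rate) are hypothetical.

```python
# Cost calculator for GPT-4.1 Nano using the published rates.
INPUT_PER_M = 0.10        # USD per million input tokens
OUTPUT_PER_M = 0.40       # USD per million output tokens
CACHED_DISCOUNT = 0.75    # 75% off cached input tokens

def request_cost(input_tokens: int, output_tokens: int, cached_tokens: int = 0) -> float:
    """USD cost of one request; cached_tokens is the portion of
    input_tokens served from the prompt cache at the discounted rate."""
    fresh = input_tokens - cached_tokens
    cost = fresh * INPUT_PER_M / 1e6
    cost += cached_tokens * INPUT_PER_M * (1 - CACHED_DISCOUNT) / 1e6
    cost += output_tokens * OUTPUT_PER_M / 1e6
    return cost

# e.g. 50k input tokens (40k of them cached) and 2k output tokens:
print(f"${request_cost(50_000, 2_000, cached_tokens=40_000):.6f}")  # $0.002800
```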
| | Llama 4 Maverick | GPT-4.1 Nano |
|---|---|---|
| Provider | Meta | OpenAI |
| Release Date | 2025-04-05 | 2025-04-14 |
| Modalities | text, images, video | text, images |
| API Providers | Meta AI, Hugging Face, Fireworks, Together, DeepInfra | OpenAI API |
| Knowledge Cut-off Date | 2024-08 | 2024-06 |
| Open Source | Yes | No |
| Pricing (Input) | Not available | $0.10 per million tokens |
| Pricing (Output) | Not available | $0.40 per million tokens |
| MMLU | Not available | 80.1% |
| MMLU-Pro | 80.5% | - |
| MMMU | 73.4% | 55.4% |
| HellaSwag | Not available | - |
| HumanEval | Not available | - |
| MATH | Not available | - |
| GPQA (Diamond) | 69.8% | 50.3% |
| IFEval | Not available | 74.5% |
| SimpleQA | - | - |
| AIME 2024 | - | 29.4% |
| AIME 2025 | - | - |
| Aider Polyglot | - | - |
| LiveCodeBench v5 | - | - |
| Global MMLU (Lite) | - | 66.9% |
| MathVista (Image Reasoning) | - | 56.2% |
| **VideoGameBench** | | |
| Total score | 0% | - |
| Doom II | 0% | - |
| Kirby's Dream Land DX | 0% | - |
| Zelda: Link's Awakening DX | 0% | - |
| Civilization I | 0% | - |
| Pokemon Crystal | 0% | - |
| The Need for Speed | 0% | - |
| The Incredible Machine | 0% | - |
| Secret Game 1 | 0% | - |
| Secret Game 2 | 0% | - |
| Secret Game 3 | 0% | - |
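
For readers who want to try the smaller model, a minimal request through the official OpenAI Python SDK might look like the sketch below. The `gpt-4.1-nano` model id follows OpenAI's published naming but should be verified against current documentation, and the prompt is arbitrary.

```python
# Minimal GPT-4.1 Nano request via the OpenAI Python SDK (pip install openai).
# Requires OPENAI_API_KEY to be set in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

response = client.chat.completions.create(
    model="gpt-4.1-nano",
    messages=[{"role": "user", "content": "Summarize MoE routing in one sentence."}],
)
print(response.choices[0].message.content)
```

Llama 4 Maverick can be exercised the same way through hosted providers such as Fireworks or Together, most of which expose OpenAI-compatible endpoints; check each provider's documentation for the exact model identifier.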