Llama 4 Maverick

Llama 4 Maverick is a cutting-edge multimodal model featuring 17 billion active parameters within a Mixture-of-Experts architecture of 128 experts, for roughly 400 billion total parameters. It leads its class, outperforming models such as GPT-4o and Gemini 2.0 Flash across a wide range of benchmarks, and it matches DeepSeek V3 on reasoning and coding tasks while using fewer than half the active parameters. Designed for efficiency and scalability, Maverick delivers a best-in-class performance-to-cost ratio, with an experimental chat variant achieving an Elo score of 1417 on LMArena. Despite its scale, it runs on a single NVIDIA H100 host, making deployment simple and practical.
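
As an illustration of how the model is typically accessed, here is a minimal sketch of querying Llama 4 Maverick through one of the hosted API providers listed in the comparison below, using an OpenAI-compatible client. The Together base URL and the model identifier are assumptions for illustration only; check your provider's documentation for the exact endpoint and model ID.

```python
# Minimal sketch: calling Llama 4 Maverick via an OpenAI-compatible provider endpoint.
# Assumptions: the "openai" Python package is installed, TOGETHER_API_KEY is set, and the
# provider exposes the model under the ID below (verify against the provider's docs).
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.together.xyz/v1",  # assumed provider endpoint; others differ
    api_key=os.environ["TOGETHER_API_KEY"],
)

response = client.chat.completions.create(
    model="meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8",  # assumed model ID
    messages=[
        {"role": "user", "content": "Explain Mixture-of-Experts routing in two sentences."}
    ],
    max_tokens=200,
)
print(response.choices[0].message.content)
```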

GPT-4.1

GPT-4.1, launched by OpenAI on April 14, 2025, introduces a 1 million token context window and supports outputs of up to 32,768 tokens per request. It delivers strong performance on coding tasks, achieving 54.6% on the SWE-bench Verified benchmark, and shows a 10.5-percentage-point improvement over GPT-4o on MultiChallenge for instruction following. The model's knowledge cutoff is June 2024. Pricing is $2.00 per million input tokens and $8.00 per million output tokens, with a 75% discount applied to cached inputs, making it highly cost-efficient for repeated queries.
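
To make the pricing concrete, the sketch below works through the cost of a single request at the listed rates, treating cached input tokens as billed at the 75% discount (i.e. $0.50 per million). The token counts are illustrative values, not measurements.

```python
# Back-of-the-envelope cost model for GPT-4.1 at the listed rates (USD per million tokens).
INPUT_PER_M = 2.00
OUTPUT_PER_M = 8.00
CACHED_INPUT_PER_M = INPUT_PER_M * 0.25  # 75% discount on cached input tokens

def request_cost(input_tokens: int, output_tokens: int, cached_input_tokens: int = 0) -> float:
    """Cost in USD of one request; cached_input_tokens is the cached portion of the input."""
    uncached = input_tokens - cached_input_tokens
    return (
        uncached * INPUT_PER_M
        + cached_input_tokens * CACHED_INPUT_PER_M
        + output_tokens * OUTPUT_PER_M
    ) / 1_000_000

# Example: a 50k-token prompt of which 40k hits the cache, plus 2k tokens of output.
print(f"${request_cost(50_000, 2_000, cached_input_tokens=40_000):.4f}")  # -> $0.0560
```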

                       | Llama 4 Maverick                                      | GPT-4.1
Provider               | Meta                                                  | OpenAI
Release Date           | Apr 05, 2025                                          | Apr 14, 2025
Modalities             | text, images, video                                   | text, images
API Providers          | Meta AI, Hugging Face, Fireworks, Together, DeepInfra | OpenAI API
Knowledge Cut-off Date | 2024-08                                               | -
Open Source            | Yes                                                   | No
Pricing (Input)        | Not available                                         | $2.00 per million tokens
Pricing (Output)       | Not available                                         | $8.00 per million tokens
MMLU                   | Not available                                         | 90.2% (pass@1)
MMLU Pro               | 80.5%                                                 | -
MMMU                   | 73.4%                                                 | 74.8%
HellaSwag              | Not available                                         | -
HumanEval              | Not available                                         | -
MATH                   | Not available                                         | -
GPQA (Diamond)         | 69.8%                                                 | 66.3%
IFEval                 | Not available                                         | -
Mobile Application     | -                                                     | -
