Llama 4 Maverick

Llama 4 Maverick is a cutting-edge multimodal model with 17 billion active parameters in a mixture-of-experts (MoE) architecture of 128 experts, for a total of 400 billion parameters. It leads its class, outperforming models such as GPT-4o and Gemini 2.0 Flash across a wide range of benchmarks, and it matches DeepSeek V3 on reasoning and coding tasks while using fewer than half the active parameters. Designed for efficiency and scalability, Maverick delivers a best-in-class performance-to-cost ratio, with an experimental chat variant achieving an Elo score of 1417 on LMArena. Despite its scale, it runs on a single NVIDIA H100 host, keeping deployment simple and practical.
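The relationship between total and active parameters in an MoE model can be sanity-checked with simple arithmetic: total size counts every expert, while the active count includes only the experts routed per token plus the shared weights. The split below (14B shared, 128 experts of ~3B each, one expert active per token) is a hypothetical illustration that reproduces the headline figures, not Meta's published breakdown.

```python
def moe_param_counts(shared_b: float, expert_b: float, num_experts: int, top_k: int):
    """Return (total, active) parameter counts in billions for a simple
    MoE model: shared weights plus `num_experts` experts of `expert_b`
    billion parameters each, of which `top_k` are routed per token."""
    total = shared_b + num_experts * expert_b
    active = shared_b + top_k * expert_b
    return total, active

# Hypothetical split matching the headline numbers above:
total, active = moe_param_counts(shared_b=14.0, expert_b=3.0, num_experts=128, top_k=1)
print(f"total ~= {total:.0f}B, active ~= {active:.0f}B")  # total ~= 398B, active ~= 17B
```

This is why Maverick can compete with much larger dense models at lower cost: per-token compute scales with the ~17B active parameters, not the 400B stored ones.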

o3-mini

The OpenAI o3-mini is a high-speed, cost-effective reasoning model designed for STEM applications, with strong performance in science, mathematics, and coding. Launched in January 2025, it includes essential developer features such as function calling, structured outputs, and developer messages. The model offers three reasoning effort levels—low, medium, and high—allowing users to optimize between deeper analysis and faster response times. Unlike the o3 model, it lacks vision capabilities. Initially available to select developers in API usage tiers 3-5, it can be accessed via the Chat Completions API, Assistants API, and Batch API.
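The reasoning effort levels and developer messages mentioned above are selected per request. The sketch below builds a Chat Completions request body using the `reasoning_effort` parameter and a `developer` role message; the payload shape follows the OpenAI API at launch, but verify field names against the current API reference before relying on them.

```python
import json

# Sketch of a Chat Completions request body for o3-mini, selecting the
# "high" reasoning effort level (options: "low", "medium", "high").
payload = {
    "model": "o3-mini",
    "reasoning_effort": "high",  # trade response speed for deeper analysis
    "messages": [
        {"role": "developer", "content": "Answer with a single integer."},
        {"role": "user", "content": "What is 17 * 24?"},
    ],
}

print(json.dumps(payload, indent=2))
```

Lower effort levels return faster and consume fewer reasoning tokens; "high" spends more tokens thinking before answering.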

| | Llama 4 Maverick | o3-mini |
|---|---|---|
| Modalities | text, images, video | text |
| API Providers | Meta AI, Hugging Face, Fireworks, Together, DeepInfra | OpenAI API |
| Knowledge Cut-off Date | 2024-08 | Unknown |
| Open Source | Yes (Source) | No |
| Pricing (Input) | Not available | $1.10 per million tokens |
| Pricing (Output) | Not available | $4.40 per million tokens |
| MMLU | Not available | 86.9% (pass@1, high effort; Source) |
| MMLU-Pro | 80.5% (Source) | Not available |
| MMMU | 73.4% (Source) | Not available |
| HellaSwag | Not available | Not available |
| HumanEval | Not available | Not available |
| MATH | Not available | 97.9% (pass@1, high effort; Source) |
| GPQA | 69.8% (Diamond; Source) | 79.7% (0-shot, high effort; Source) |
| IFEval | Not available | Not available |
| SimpleQA | - | - |
| AIME 2024 | - | - |
| AIME 2025 | - | - |
| Aider Polyglot | - | - |
| LiveCodeBench v5 | - | - |
| Global MMLU (Lite) | - | - |
| MathVista | - | - |
| Mobile Application | - | |
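The per-million-token prices listed for o3-mini translate directly into request costs. The helper below applies the table's $1.10 input / $4.40 output rates; the token counts in the example are made up for illustration.

```python
def o3_mini_cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Estimate request cost from the per-million-token rates in the table."""
    input_rate = 1.10 / 1_000_000   # USD per input token
    output_rate = 4.40 / 1_000_000  # USD per output token
    return input_tokens * input_rate + output_tokens * output_rate

# e.g. a 2,000-token prompt with a 500-token reply:
print(f"${o3_mini_cost_usd(2_000, 500):.4f}")  # $0.0044
```

Note that at high reasoning effort, hidden reasoning tokens are billed as output, so actual costs can exceed what the visible reply length suggests.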

VideoGameBench

| | Llama 4 Maverick | o3-mini |
|---|---|---|
| Total score | 0% | - |
| Doom II | 0% | - |
| Dream DX | 0% | - |
| Awakening DX | 0% | - |
| Civilization I | 0% | - |
| Pokemon Crystal | 0% | - |
| The Need for Speed | 0% | - |
| The Incredible Machine | 0% | - |
| Secret Game 1 | 0% | - |
| Secret Game 2 | 0% | - |
| Secret Game 3 | 0% | - |
