Llama 4 Maverick is a cutting-edge multimodal model featuring 17 billion active parameters within a Mixture-of-Experts (MoE) architecture of 128 experts, totaling 400 billion parameters. It leads its class, outperforming models such as GPT-4o and Gemini 2.0 Flash across a wide range of benchmarks, and it matches DeepSeek V3 on reasoning and coding tasks while using fewer than half the active parameters. Designed for efficiency and scalability, Maverick delivers a best-in-class performance-to-cost ratio, with an experimental chat variant achieving an Elo score of 1417 on LMArena. Despite its scale, it runs on a single NVIDIA H100 host, keeping deployment simple and practical.
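To make the Mixture-of-Experts numbers concrete, here is a minimal sketch of top-k expert routing, the mechanism that lets a 400-billion-parameter model activate only about 17 billion parameters per token. The dimensions, expert count, and top-1 routing below are toy assumptions chosen for illustration, not Maverick's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions -- illustrative only, far smaller than Maverick's real
# 128-expert / 400B-parameter configuration.
D_MODEL, D_FF, N_EXPERTS, TOP_K = 64, 256, 8, 1

# Each expert is a small two-layer MLP; only routed tokens pass through it.
W_in = rng.normal(0, 0.02, (N_EXPERTS, D_MODEL, D_FF))
W_out = rng.normal(0, 0.02, (N_EXPERTS, D_FF, D_MODEL))
W_router = rng.normal(0, 0.02, (D_MODEL, N_EXPERTS))

def moe_layer(x: np.ndarray) -> np.ndarray:
    """Route each token to its top-k experts; only those experts run.

    This is why an MoE model's *active* parameter count per token is a
    small fraction of its total parameter count: unrouted expert
    weights are simply skipped.
    """
    logits = x @ W_router                              # (tokens, experts)
    top = np.argsort(logits, axis=-1)[:, -TOP_K:]      # chosen expert ids
    # Softmax over the selected experts' logits gives mixing weights.
    sel = np.take_along_axis(logits, top, axis=-1)
    gate = np.exp(sel - sel.max(-1, keepdims=True))
    gate /= gate.sum(-1, keepdims=True)

    out = np.zeros_like(x)
    for t, (experts, weights) in enumerate(zip(top, gate)):
        for e, w in zip(experts, weights):
            h = np.maximum(x[t] @ W_in[e], 0.0)        # expert FFN (ReLU)
            out[t] += w * (h @ W_out[e])
    return out

tokens = rng.normal(size=(4, D_MODEL))   # 4 toy token embeddings
print(moe_layer(tokens).shape)           # (4, 64)
```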
Claude 3.7 Sonnet is Anthropic's most advanced model yet and the first hybrid reasoning AI on the market. It offers both standard and extended thinking modes, with the latter providing transparent, step-by-step reasoning. The model excels in coding and front-end web development, achieving state-of-the-art results on SWE-bench Verified and TAU-bench. Available via Claude.ai, the Anthropic API, Amazon Bedrock, and Google Cloud's Vertex AI, it sets a new benchmark for intelligent AI-driven problem-solving.
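Extended thinking is exposed directly through the Anthropic API. The sketch below shows how a request might enable it with the Python SDK; the dated model ID and the token budgets are assumptions to verify against Anthropic's current documentation.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-7-sonnet-20250219",  # dated ID from the 3.7 release; check the docs
    max_tokens=16000,                    # must exceed the thinking budget below
    thinking={"type": "enabled", "budget_tokens": 8000},
    messages=[{"role": "user", "content": "Is 3,999,999,999 prime? Show your reasoning."}],
)

# With extended thinking enabled, the response interleaves transparent
# "thinking" blocks (the step-by-step reasoning) with ordinary text blocks.
for block in response.content:
    if block.type == "thinking":
        print("[thinking]", block.thinking)
    elif block.type == "text":
        print(block.text)
```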
| | Llama 4 Maverick | Claude 3.7 Sonnet |
|---|---|---|
| Provider | Meta | Anthropic |
| Release Date | 2025-04-05 | 2025-02-24 |
| Modalities | text, images, video | text, images |
| API Providers | Meta AI, Hugging Face, Fireworks, Together, DeepInfra | Claude.ai, Anthropic API, Amazon Bedrock, Google Cloud Vertex AI |
| Knowledge Cut-off Date | 2024-08 | - |
| Open Source | Yes (Source) | No |
| Pricing (Input) | Not available | $3.00 per million tokens |
| Pricing (Output) | Not available | $15.00 per million tokens (see cost sketch below) |
| MMLU | Not available | Not available |
| MMLU-Pro | 80.5% (Source) | Not available |
| MMMU | 73.4% (Source) | 71.8% (Source) |
| HellaSwag | Not available | Not available |
| HumanEval | Not available | Not available |
| MATH | Not available | 82.2% (Source) |
| GPQA Diamond | 69.8% (Source) | 68% (Source) |
| IFEval | Not available | 90.8% (Source) |
| SimpleQA | - | - |
| AIME 2024 | - | - |
| AIME 2025 | - | - |
| Aider Polyglot | - | - |
| LiveCodeBench v5 | - | - |
| Global MMLU (Lite) | - | - |
| MathVista | - | - |
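To put the pricing rows above in concrete terms, here is a small cost helper using Claude 3.7 Sonnet's listed rates; the request sizes in the example are hypothetical.

```python
# Per-million-token rates from the pricing rows above (Claude 3.7 Sonnet, USD).
INPUT_RATE, OUTPUT_RATE = 3.00, 15.00

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at the listed rates."""
    return (input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE) / 1_000_000

# Hypothetical request: a 2,000-token prompt and an 800-token reply.
print(f"${request_cost(2_000, 800):.4f}")  # -> $0.0180
```

Note that with extended thinking enabled, thinking tokens count toward output-token billing, so generous thinking budgets raise per-request costs accordingly.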
| VideoGameBench | Llama 4 Maverick | Claude 3.7 Sonnet |
|---|---|---|
| Total score | 0% | 0% |
| Doom II | 0% | 0% |
| Kirby's Dream Land DX | 0% | 0% |
| Zelda: Link's Awakening DX | 0% | 0% |
| Civilization I | 0% | 0% |
| Pokemon Crystal | 0% | 0% |
| The Need for Speed | 0% | 0% |
| The Incredible Machine | 0% | 0% |
| Secret Game 1 | 0% | 0% |
| Secret Game 2 | 0% | 0% |
| Secret Game 3 | 0% | 0% |