Claude 3.7 Sonnet is Anthropic's most advanced model yet and the first hybrid reasoning AI on the market. It offers both standard and extended thinking modes, with the latter providing transparent, step-by-step reasoning. The model excels in coding and front-end web development, achieving state-of-the-art results on SWE-bench Verified and TAU-bench. Available via Claude.ai, the Anthropic API, Amazon Bedrock, and Google Cloud's Vertex AI, it sets a new benchmark for intelligent AI-driven problem-solving.
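The extended thinking mode mentioned above is toggled per-request. A minimal sketch of such a request payload, assuming the Anthropic Messages API's `thinking` parameter — the model id and token budgets below are illustrative assumptions, not values taken from this page:

```python
# Sketch: enabling Claude 3.7 Sonnet's extended thinking mode.
# Model id and budgets are assumed for illustration.

def build_extended_thinking_request(prompt: str) -> dict:
    """Request payload asking the model to reason step by step in a
    dedicated 'thinking' block before producing its final answer."""
    return {
        "model": "claude-3-7-sonnet-20250219",  # assumed model id
        "max_tokens": 16_000,
        # The budget caps how many tokens are spent on visible reasoning.
        "thinking": {"type": "enabled", "budget_tokens": 8_000},
        "messages": [{"role": "user", "content": prompt}],
    }

# With the official Python SDK this payload would be sent as:
#   client = anthropic.Anthropic()
#   response = client.messages.create(**build_extended_thinking_request("..."))
payload = build_extended_thinking_request("Compare these two models.")
print(payload["thinking"]["type"])
```

Omitting the `thinking` field yields the standard (non-reasoning) mode, which is the tradeoff the hybrid design offers: pay extra reasoning tokens only when a task warrants them.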
Llama 3.3 70B Instruct, created by Meta, is a multilingual large language model specifically fine-tuned for instruction-based tasks and optimized for conversational applications. It is capable of processing and generating text in multiple languages, with a context window supporting up to 128,000 tokens. Launched on December 6, 2024, the model surpasses numerous open-source and proprietary chat models in various industry benchmarks. It utilizes Grouped-Query Attention (GQA) to improve scalability and has been trained on a diverse dataset comprising over 15 trillion tokens from publicly available sources. The model's knowledge is current up to December 2023.
| | Claude 3.7 Sonnet | Llama 3.3 70B Instruct |
|---|---|---|
| Provider | Anthropic | Meta |
| Release Date | February 24, 2025 | December 6, 2024 |
| Modalities | text, images | text |
| API Providers | Claude.ai, Anthropic API, Amazon Bedrock, Google Cloud Vertex AI | Fireworks, Together, DeepInfra, Hyperbolic |
| Knowledge Cut-off Date | - | December 2023 |
| Open Source | No | Yes |
| Pricing (Input) | $3.00 per million tokens | $0.23 per million tokens |
| Pricing (Output) | $15.00 per million tokens | $0.40 per million tokens |

| Benchmark | Claude 3.7 Sonnet | Llama 3.3 70B Instruct |
|---|---|---|
| MMLU | - | 86% (0-shot, CoT) |
| MMLU-Pro | - | 68.9% (5-shot, CoT) |
| MMMU | 71.8% | - |
| HellaSwag | - | - |
| HumanEval | - | 88.4% (pass@1) |
| MATH | 82.2% | 77% (0-shot, CoT) |
| GPQA | 68% (Diamond) | 50.5% (0-shot, CoT) |
| IFEval | 90.8% | 92.1% |
| SimpleQA | - | - |
| AIME 2024 | - | - |
| AIME 2025 | - | - |
| Aider Polyglot | - | - |
| LiveCodeBench v5 | - | - |
| Global MMLU (Lite) | - | - |
| MathVista | - | - |
| Mobile Application | - | - |

| VideoGameBench | Claude 3.7 Sonnet | Llama 3.3 70B Instruct |
|---|---|---|
| Total score | 0% | - |
| Doom II | 0% | - |
| Dream DX | 0% | - |
| Awakening DX | 0% | - |
| Civilization I | 0% | - |
| Pokemon Crystal | 0% | - |
| The Need for Speed | 0% | - |
| The Incredible Machine | 0% | - |
| Secret Game 1 | 0% | - |
| Secret Game 2 | 0% | - |
| Secret Game 3 | 0% | - |