Claude 3.7 Sonnet is Anthropic's most advanced model yet and the first hybrid reasoning AI on the market. It offers both standard and extended thinking modes, with the latter providing transparent, step-by-step reasoning. The model excels in coding and front-end web development, achieving state-of-the-art results on SWE-bench Verified and TAU-bench. Available via Claude.ai, the Anthropic API, Amazon Bedrock, and Google Cloud's Vertex AI, it sets a new benchmark for intelligent AI-driven problem-solving.
GPT-4.1 Nano, launched by OpenAI on April 14, 2025, is the company's fastest and most affordable model to date. Designed for low-latency tasks such as classification, autocomplete, and fast inference scenarios, it combines compact architecture with robust capabilities. Despite its size, it supports an impressive 1 million token context window and delivers strong benchmark results, achieving 80.1% on MMLU and 50.3% on GPQA. With a knowledge cutoff of June 2024, GPT-4.1 Nano offers exceptional value at just $0.10 per million input tokens and $0.40 per million output tokens, with a 75% discount applied to cached inputs, making it ideal for high-volume, cost-sensitive deployments.
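To make those prices concrete, here is a minimal Python sketch that estimates the cost of a GPT-4.1 Nano request from the rates listed above ($0.10 per million input tokens, $0.40 per million output tokens, 75% off cached inputs). The `nano_cost` helper is purely illustrative, not part of any OpenAI SDK:

```python
def nano_cost(input_tokens, output_tokens, cached_tokens=0):
    """Estimate GPT-4.1 Nano request cost in USD from the listed prices."""
    INPUT_PER_M = 0.10       # USD per million input tokens
    OUTPUT_PER_M = 0.40      # USD per million output tokens
    CACHE_DISCOUNT = 0.75    # cached input tokens cost 25% of the normal rate

    fresh = input_tokens - cached_tokens
    return (fresh * INPUT_PER_M
            + cached_tokens * INPUT_PER_M * (1 - CACHE_DISCOUNT)
            + output_tokens * OUTPUT_PER_M) / 1_000_000

# 1M input tokens (half of them cached) plus 100k output tokens:
print(f"${nano_cost(1_000_000, 100_000, cached_tokens=500_000):.4f}")  # prints "$0.1025"
```

At these rates even a full 1-million-token context costs roughly a dime, which is what makes the model attractive for high-volume deployments.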
| | Claude 3.7 Sonnet | GPT-4.1 Nano |
|---|---|---|
| Modalities | text, images | text, images |
| API Providers | Claude.ai, Anthropic API, Amazon Bedrock, Google Cloud Vertex AI | OpenAI API |
| Knowledge Cut-off Date | - | June 2024 |
| Open Source | No | No |
| Pricing (Input) | $3.00 per million tokens | $0.10 per million tokens |
| Pricing (Output) | $15.00 per million tokens | $0.40 per million tokens |
| MMLU | - | 80.1% |
| MMLU-Pro | - | - |
| MMMU | 71.8% | 55.4% |
| HellaSwag | - | - |
| HumanEval | - | - |
| MATH | 82.2% | - |
| GPQA (Diamond) | 68% | 50.3% |
| IFEval | 90.8% | 74.5% |
| SimpleQA | - | - |
| AIME 2024 | - | 29.4% |
| AIME 2025 | - | - |
| Aider Polyglot | - | - |
| LiveCodeBench v5 | - | - |
| Global MMLU (Lite) | - | 66.9% |
| MathVista | - | 56.2% (image reasoning) |
| **VideoGameBench** | | |
| Total score | 0% | - |
| Doom II | 0% | - |
| Dream DX | 0% | - |
| Awakening DX | 0% | - |
| Civilization I | 0% | - |
| Pokemon Crystal | 0% | - |
| The Need for Speed | 0% | - |
| The Incredible Machine | 0% | - |
| Secret Game 1 | 0% | - |
| Secret Game 2 | 0% | - |
| Secret Game 3 | 0% | - |