GPT-4.1, launched by OpenAI on April 14, 2025, introduces a 1 million token context window and supports outputs of up to 32,768 tokens per request. It delivers strong coding performance, scoring 54.6% on the SWE-Bench Verified benchmark, and improves on GPT-4o by 10.5 percentage points on the MultiChallenge instruction-following benchmark. Its knowledge cutoff is June 2024. Pricing is $2.00 per million input tokens and $8.00 per million output tokens, with a 75% discount on cached input tokens, making repeated queries over the same context considerably cheaper.
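For reference, the sketch below shows one way to call GPT-4.1 through the OpenAI Python SDK (v1.x). The model identifier matches the API Providers row in the table below; the prompt and token limit are illustrative placeholders, not values from this page.

```python
# Minimal sketch: one GPT-4.1 request via the OpenAI Python SDK (v1.x).
# The prompt and max_tokens value are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4.1",
    messages=[
        {"role": "system", "content": "You are a concise coding assistant."},
        {"role": "user", "content": "Write a Python function that reverses a string."},
    ],
    max_tokens=1024,  # well under the 32,768-token output ceiling
)

print(response.choices[0].message.content)
print(response.usage)  # prompt/completion token counts, useful for cost tracking
```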
| | GPT-4.1 | GPT-4.1 Nano |
|---|---|---|
| Provider | OpenAI | OpenAI |
| Web Site | - | - |
| Release Date | Apr 14, 2025 | Apr 14, 2025 |
| Modalities | text, images | text, images |
| API Providers | OpenAI API | OpenAI API |
| Knowledge Cut-off Date | June 2024 | - |
| Open Source | No | No |
| Pricing (Input) | $2.00 per million tokens | $0.10 per million tokens |
| Pricing (Output) | $8.00 per million tokens | $0.40 per million tokens |
| MMLU | 90.2% (pass@1) | 80.1% |
| MMLU Pro | - | - |
| MMMU | 74.8% | 55.4% |
| HellaSwag | - | - |
| HumanEval | - | - |
| MATH | - | - |
| GPQA (Diamond) | 66.3% | 50.3% |
| IFEval | - | 74.5% |
| AIME 2024 | 48.1% | 29.4% |
| AIME 2025 | - | - |
| Unlabeled benchmark | 87.3% (pass@1) | 66.9% |
| Unlabeled benchmark (image reasoning) | - | 56.2% |
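Using the per-million-token list prices in the table above and the 75% cached-input discount mentioned earlier, the per-request cost can be estimated as in the sketch below. The token counts in the example are made-up assumptions, not figures from this page.

```python
# Rough per-request cost estimate from the published list prices.
# Prices are USD per million tokens; the example token counts are illustrative.
PRICES = {
    "gpt-4.1":      {"input": 2.00, "output": 8.00},
    "gpt-4.1-nano": {"input": 0.10, "output": 0.40},
}
CACHED_INPUT_DISCOUNT = 0.75  # 75% off cached input tokens


def estimate_cost(model: str, input_tokens: int, output_tokens: int,
                  cached_input_tokens: int = 0) -> float:
    """Return the estimated request cost in USD."""
    p = PRICES[model]
    fresh_input = input_tokens - cached_input_tokens
    cost = fresh_input / 1_000_000 * p["input"]
    cost += cached_input_tokens / 1_000_000 * p["input"] * (1 - CACHED_INPUT_DISCOUNT)
    cost += output_tokens / 1_000_000 * p["output"]
    return cost


# Example: a 50k-token prompt (40k of it cached) and a 2k-token reply.
print(f"gpt-4.1:      ${estimate_cost('gpt-4.1', 50_000, 2_000, 40_000):.4f}")
print(f"gpt-4.1-nano: ${estimate_cost('gpt-4.1-nano', 50_000, 2_000, 40_000):.4f}")
```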