OpenAI's o3-mini is a fast, cost-effective reasoning model designed for STEM applications, with strong performance in science, mathematics, and coding. Launched in January 2025, it includes essential developer features such as function calling, structured outputs, and developer messages. The model offers three reasoning effort levels — low, medium, and high — letting users trade deeper analysis against faster response times. Unlike the o3 model, it lacks vision capabilities. Initially available to select developers in API usage tiers 3-5, it can be accessed via the Chat Completions API, Assistants API, and Batch API.
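The reasoning effort level is selected per request through the `reasoning_effort` parameter of the Chat Completions API. A minimal sketch, assuming the official `openai` Python SDK; the helper function and prompt are illustrative, not part of the SDK:

```python
# Sketch: selecting o3-mini's reasoning effort for a Chat Completions request.
# The helper below only assembles the request parameters; actually sending it
# requires an OpenAI API key and the `openai` package (pip install openai).

def build_o3_mini_request(prompt: str, effort: str = "medium") -> dict:
    """Assemble Chat Completions parameters for o3-mini.

    `effort` is one of "low", "medium", or "high" -- higher values spend
    more reasoning tokens in exchange for higher latency and output cost.
    """
    if effort not in ("low", "medium", "high"):
        raise ValueError(f"unsupported reasoning effort: {effort}")
    return {
        "model": "o3-mini",
        "reasoning_effort": effort,
        "messages": [{"role": "user", "content": prompt}],
    }

# To actually send the request:
# from openai import OpenAI
# client = OpenAI()
# response = client.chat.completions.create(
#     **build_o3_mini_request("Factor x^4 - 1 over the rationals.", effort="high")
# )
```

"High" effort is what the benchmark figures below refer to; "low" is the sensible default for latency-sensitive workloads.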
Claude 3.7 Sonnet is Anthropic's most advanced model yet and the first hybrid reasoning AI on the market. It offers both standard and extended thinking modes, with the latter providing transparent, step-by-step reasoning. The model excels in coding and front-end web development, achieving state-of-the-art results on SWE-bench Verified and TAU-bench. Available via Claude.ai, the Anthropic API, Amazon Bedrock, and Google Cloud's Vertex AI, it sets a new benchmark for intelligent AI-driven problem-solving.
| | o3-mini | Claude 3.7 Sonnet |
|---|---|---|
| Provider | OpenAI | Anthropic |
| Release Date | January 2025 | February 2025 |
| Modalities | text | text, images |
| API Providers | OpenAI API | Claude.ai, Anthropic API, Amazon Bedrock, Google Cloud Vertex AI |
| Knowledge Cut-off Date | Unknown | - |
| Open Source | No | No |
| Pricing (Input) | $1.10 per million tokens | $3.00 per million tokens |
| Pricing (Output) | $4.40 per million tokens | $15.00 per million tokens |
| MMLU | 86.9% (pass@1, high effort) | Not available |
| MMLU-Pro | Not available | Not available |
| MMMU | Not available | 71.8% |
| HellaSwag | Not available | Not available |
| HumanEval | Not available | Not available |
| MATH | 97.9% (pass@1, high effort) | 82.2% |
| GPQA | 79.7% (0-shot, high effort) | 68% (Diamond) |
| IFEval | Not available | 90.8% |
| SimpleQA | - | - |
| AIME 2024 | - | - |
| AIME 2025 | - | - |
| Aider Polyglot | - | - |
| LiveCodeBench v5 | - | - |
| Global MMLU (Lite) | - | - |
| MathVista | - | - |

VideoGameBench:

| Game | o3-mini | Claude 3.7 Sonnet |
|---|---|---|
| Total score | - | 0% |
| Doom II | - | 0% |
| Dream DX | - | 0% |
| Awakening DX | - | 0% |
| Civilization I | - | 0% |
| Pokemon Crystal | - | 0% |
| The Need for Speed | - | 0% |
| The Incredible Machine | - | 0% |
| Secret Game 1 | - | 0% |
| Secret Game 2 | - | 0% |
| Secret Game 3 | - | 0% |
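The pricing rows translate directly into per-request cost. A small sketch using the per-million-token rates listed above; the model keys and the example token counts are illustrative:

```python
# Estimate the dollar cost of one request from the listed
# per-million-token rates: (input $/M tokens, output $/M tokens).

RATES = {
    "o3-mini": (1.10, 4.40),
    "claude-3.7-sonnet": (3.00, 15.00),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the cost in dollars of a single request at the listed rates."""
    in_rate, out_rate = RATES[model]
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# Example: a 10,000-token prompt with a 2,000-token answer
# o3-mini:           (10_000 * 1.10 + 2_000 * 4.40) / 1e6  = $0.0198
# Claude 3.7 Sonnet: (10_000 * 3.00 + 2_000 * 15.00) / 1e6 = $0.0600
```

At these rates Claude 3.7 Sonnet is roughly 3x more expensive per input token and about 3.4x per output token, which matters most for long, extended-thinking outputs.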