o3

OpenAI o3 is OpenAI's most advanced reasoning model, purpose-built for complex, high-cognition tasks. Launched in April 2025, it delivers exceptional performance in software engineering, mathematics, and scientific problem-solving. The model offers three reasoning-effort levels (low, medium, and high), letting users trade latency against depth of reasoning based on task complexity. o3 supports essential developer tools, including function calling, structured outputs, and system-level messaging. With built-in vision capabilities, o3 can interpret and analyze images, making it suitable for multimodal applications. It is available through the Chat Completions, Assistants, and Batch APIs for flexible integration into enterprise and research workflows.
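As a minimal sketch of the reasoning-effort control described above, the snippet below assembles a Chat Completions request payload for o3. It only builds the request dictionary (nothing is sent, so no API key or SDK is needed); the field names follow the Chat Completions API's documented shape, and the prompt text is an illustrative assumption.

```python
def build_o3_request(prompt: str, effort: str = "medium") -> dict:
    """Assemble a Chat Completions payload for o3 with a reasoning-effort level."""
    if effort not in ("low", "medium", "high"):
        raise ValueError("reasoning effort must be 'low', 'medium', or 'high'")
    return {
        "model": "o3",
        "reasoning_effort": effort,  # trade latency for depth of reasoning
        "messages": [{"role": "user", "content": prompt}],
    }

# Example: a hard math prompt where deeper reasoning is worth the latency.
payload = build_o3_request("Prove that sqrt(2) is irrational.", effort="high")
```

The same payload would be passed to the Chat Completions endpoint; for quick, simple tasks, `effort="low"` reduces latency and token usage.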

o3-mini

OpenAI o3-mini is a fast, cost-effective reasoning model designed for STEM applications, with strong performance in science, mathematics, and coding. Launched in January 2025, it includes essential developer features such as function calling, structured outputs, and developer messages. The model offers the same three reasoning-effort levels (low, medium, and high), letting users trade deeper analysis for faster response times. Unlike o3, it lacks vision capabilities. Initially limited to developers in API usage tiers 3-5, it is accessible via the Chat Completions, Assistants, and Batch APIs.
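Since o3-mini supports function calling, a hedged sketch of such a request is shown below. Again only the payload is assembled, not sent; the `evaluate_expression` tool name and its schema are hypothetical assumptions for illustration, while the `tools` structure follows the Chat Completions API's function-calling format.

```python
def build_o3_mini_tool_request(prompt: str) -> dict:
    """Assemble an o3-mini payload exposing one (hypothetical) callable tool."""
    tool = {
        "type": "function",
        "function": {
            "name": "evaluate_expression",  # hypothetical tool, not from the source
            "description": "Evaluate a mathematical expression and return the result.",
            "parameters": {
                "type": "object",
                "properties": {"expression": {"type": "string"}},
                "required": ["expression"],
            },
        },
    }
    return {
        "model": "o3-mini",
        "reasoning_effort": "low",  # favor speed for simple tool routing
        "messages": [{"role": "user", "content": prompt}],
        "tools": [tool],
    }

request = build_o3_mini_tool_request("What is 17 * 24?")
```

The model would respond with a tool call naming `evaluate_expression` and its arguments, which the application executes before returning the result in a follow-up message.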

o3 vs o3-mini

| | o3 | o3-mini |
| --- | --- | --- |
| Provider | OpenAI | OpenAI |
| Release Date | April 2025 | January 2025 |
| Modalities | text, images | text |
| API Providers | OpenAI API | OpenAI API |
| Knowledge Cut-off Date | - | Unknown |
| Open Source | No | No |
| Pricing, Input | $10.00 per million tokens | $1.10 per million tokens |
| Pricing, Output | $40.00 per million tokens | $4.40 per million tokens |
| MMLU | 82.9% | 86.9% (pass@1, high effort) |
| MMLU-Pro | - | Not available |
| MMMU | - | Not available |
| HellaSwag | - | Not available |
| HumanEval | - | Not available |
| MATH | - | 97.9% (pass@1, high effort) |
| GPQA | 83.3% (Diamond, no tools) | 79.7% (0-shot, high effort) |
| IFEval | - | Not available |
| SimpleQA | - | - |
| AIME 2024 | 91.6% | - |
| AIME 2025 | 88.9% | - |
| Aider Polyglot | - | - |
| LiveCodeBench v5 | - | - |
| Global MMLU (Lite) | - | - |
| MathVista | - | - |
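The roughly 9x price gap between the two models can be made concrete with a small cost calculator. The prices below are the listed per-million-token rates; the token counts in the example are illustrative assumptions.

```python
# USD per million tokens, from the pricing rows above.
PRICES = {
    "o3": {"input": 10.00, "output": 40.00},
    "o3-mini": {"input": 1.10, "output": 4.40},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request at the listed per-million-token rates."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Example: a request with 10,000 input tokens and 2,000 output tokens.
cost_o3 = request_cost("o3", 10_000, 2_000)        # 0.18 USD
cost_mini = request_cost("o3-mini", 10_000, 2_000)  # 0.0198 USD
```

At these rates the same request costs about 9x less on o3-mini, which is why routing routine STEM queries to o3-mini and reserving o3 for the hardest tasks is a common cost strategy.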