o3-mini

The OpenAI o3-mini is a fast, cost-effective reasoning model designed for STEM applications, with strong performance in science, mathematics, and coding. Launched in January 2025, it includes key developer features such as function calling, structured outputs, and developer messages. The model offers three reasoning effort levels (low, medium, and high), letting users trade deeper analysis for faster responses. Unlike the o3 model, it lacks vision capabilities. Initially available to select developers in API usage tiers 3-5, it can be accessed via the Chat Completions API, Assistants API, and Batch API.
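
For illustration, here is a minimal sketch of how the reasoning effort setting can be passed through the Chat Completions API. It assumes the official OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY in the environment; the prompt contents are placeholders.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o3-mini",
    reasoning_effort="high",  # "low", "medium", or "high", as described above
    messages=[
        # Developer messages steer o-series models in place of system prompts.
        {"role": "developer", "content": "You are a concise math tutor."},
        {"role": "user", "content": "Prove that the sum of two even integers is even."},
    ],
)

print(response.choices[0].message.content)
```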

GPT-4.1 Nano

GPT-4.1 Nano, launched by OpenAI on April 14, 2025, is the company's fastest and most affordable model to date. Designed for low-latency tasks such as classification, autocomplete, and other fast-inference scenarios, it combines a compact architecture with robust capabilities. Despite its size, it supports a 1 million-token context window and delivers strong benchmark results, scoring 80.1% on MMLU and 50.3% on GPQA. With a knowledge cutoff of June 2024, GPT-4.1 Nano is priced at $0.10 per million input tokens and $0.40 per million output tokens, with a 75% discount on cached inputs, making it well suited to high-volume, cost-sensitive deployments.
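
To make the pricing concrete, here is a small back-of-the-envelope cost calculator built only from the rates quoted above ($0.10 per million input tokens, $0.40 per million output tokens, 75% off cached inputs); the token counts in the example are hypothetical.

```python
# Rates quoted for GPT-4.1 Nano (USD per million tokens).
INPUT_PER_M = 0.10
OUTPUT_PER_M = 0.40
CACHED_INPUT_PER_M = INPUT_PER_M * (1 - 0.75)  # 75% discount on cached inputs

def estimate_cost(input_tokens: int, cached_input_tokens: int, output_tokens: int) -> float:
    """Estimated cost in USD for a workload with the given token counts."""
    return (
        input_tokens / 1_000_000 * INPUT_PER_M
        + cached_input_tokens / 1_000_000 * CACHED_INPUT_PER_M
        + output_tokens / 1_000_000 * OUTPUT_PER_M
    )

# Hypothetical workload: 50M fresh input, 200M cached input, 20M output tokens.
print(f"${estimate_cost(50_000_000, 200_000_000, 20_000_000):.2f}")
# 50 * $0.10 + 200 * $0.025 + 20 * $0.40 = $18.00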

Attribute / Benchmark    | o3-mini                      | GPT-4.1 Nano
Provider                 | OpenAI                       | OpenAI
Release Date             | January 2025                 | April 14, 2025
Modalities               | text                         | text, images
API Providers            | OpenAI API                   | OpenAI API
Knowledge Cut-off Date   | Unknown                      | June 2024
Open Source              | No                           | No
Pricing (Input)          | $1.10 per million tokens     | $0.10 per million tokens
Pricing (Output)         | $4.40 per million tokens     | $0.40 per million tokens
MMLU                     | 86.9% (pass@1, high effort)  | 80.1%
MMLU-Pro                 | Not available                | Not available
MMMU                     | Not available                | 55.4%
HellaSwag                | Not available                | Not available
HumanEval                | Not available                | Not available
MATH                     | 97.9% (pass@1, high effort)  | Not available
GPQA                     | 79.7% (0-shot, high effort)  | 50.3% (Diamond)
IFEval                   | Not available                | 74.5%
SimpleQA                 | Not available                | Not available
AIME 2024                | Not available                | 29.4%
AIME 2025                | Not available                | Not available
Aider Polyglot           | Not available                | Not available
LiveCodeBench v5         | Not available                | Not available
Global MMLU (Lite)       | Not available                | 66.9%
MathVista                | Not available                | 56.2% (Image Reasoning)