o3-mini

OpenAI o3-mini is a fast, cost-effective reasoning model designed for STEM applications, with strong performance in science, mathematics, and coding. Launched in January 2025, it supports key developer features such as function calling, structured outputs, and developer messages. The model offers three reasoning effort levels (low, medium, and high), letting users trade deeper analysis against faster responses. Unlike o3, it lacks vision capabilities. Initially available to developers in API usage tiers 3-5, it can be accessed via the Chat Completions API, Assistants API, and Batch API.
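
The sketch below shows one way to select a reasoning effort level through the Chat Completions API using the OpenAI Python SDK; the developer message and prompt are illustrative placeholders, and the exact options available may vary by account tier.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Request high reasoning effort for a harder STEM question;
# "low" or "medium" trade depth for lower latency and cost.
response = client.chat.completions.create(
    model="o3-mini",
    reasoning_effort="high",
    messages=[
        {"role": "developer", "content": "You are a concise math tutor."},
        {"role": "user", "content": "Prove that the sum of two even integers is even."},
    ],
)

print(response.choices[0].message.content)
```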

Qwen2.5-VL-32B

In the five months since the release of Qwen2-VL, developers have built new models on top of it and contributed valuable feedback. Qwen2.5-VL builds on that work with enhanced capabilities, including precise analysis of images, text, and charts, as well as object localization with structured JSON outputs. It understands long videos, identifies key events within them, and can act as an agent that operates tools on computers and phones. Architecturally, it features dynamic video processing and an optimized ViT encoder for improved speed and accuracy.
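
As a sketch of how the open-weights 32B checkpoint can be prompted for object localization with JSON output, the example below assumes the Hugging Face transformers integration (Qwen2_5_VLForConditionalGeneration) and the qwen-vl-utils helper package; the model ID and image URL are placeholders.

```python
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

model_id = "Qwen/Qwen2.5-VL-32B-Instruct"  # assumed Hugging Face model ID
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

# One user turn mixing an image with a localization instruction.
messages = [{
    "role": "user",
    "content": [
        {"type": "image", "image": "https://example.com/chart.png"},  # placeholder
        {"type": "text", "text": "Locate every bar in this chart and return bounding boxes as JSON."},
    ],
}]

text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(text=[text], images=image_inputs, videos=video_inputs,
                   padding=True, return_tensors="pt").to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=512)
print(processor.batch_decode(output_ids[:, inputs.input_ids.shape[1]:],
                             skip_special_tokens=True)[0])
```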

|                        | o3-mini                              | Qwen2.5-VL-32B      |
|------------------------|--------------------------------------|---------------------|
| Provider               | OpenAI                               | Alibaba Cloud       |
| Web Site               | -                                    | -                   |
| Release Date           | Jan 31, 2025                         | Mar 25, 2025        |
| Modalities             | text                                 | text, images, video |
| API Providers          | OpenAI API                           | -                   |
| Knowledge Cut-off Date | Unknown                              | Unknown             |
| Open Source            | No                                   | Yes (Source)        |
| Pricing (Input)        | $1.10 per million tokens             | $0                  |
| Pricing (Output)       | $4.40 per million tokens             | $0                  |
| MMLU                   | 86.9% (pass@1, high effort) (Source) | 78.4% (Source)      |
| MMLU Pro               | Not available                        | 49.5%               |
| MMMU                   | Not available                        | 70%                 |
| HellaSwag              | Not available                        | Not available       |
| HumanEval              | Not available                        | Not available       |
| MATH                   | 97.9% (pass@1, high effort) (Source) | 82.2%               |
| GPQA                   | 79.7% (0-shot, high effort) (Source) | 46.0% (Diamond)     |
| IFEval                 | Not available                        | Not available       |
| AIME 2024              | -                                    | -                   |
| AIME 2025              | -                                    | -                   |
| Mobile Application     | -                                    | -                   |
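
To put the listed prices in perspective, here is a small, hypothetical cost estimate for a single o3-mini request; the token counts are illustrative, and reasoning models also bill hidden reasoning tokens at the output rate.

```python
# Listed o3-mini prices in USD per million tokens.
INPUT_PRICE_PER_TOKEN = 1.10 / 1_000_000
OUTPUT_PRICE_PER_TOKEN = 4.40 / 1_000_000

def o3_mini_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Estimate the cost of one o3-mini request in USD.

    completion_tokens should include any hidden reasoning tokens,
    which are billed at the output rate.
    """
    return (prompt_tokens * INPUT_PRICE_PER_TOKEN
            + completion_tokens * OUTPUT_PRICE_PER_TOKEN)

# Example: a 2,000-token prompt with a 1,000-token answer -> $0.0066
print(f"${o3_mini_cost(2_000, 1_000):.4f}")
```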
