o3

o3 is OpenAI's most advanced reasoning model, purpose-built for complex, high-cognition tasks. Launched in April 2025, it delivers strong performance in software engineering, mathematics, and scientific problem-solving. The model offers three levels of reasoning effort (low, medium, and high), letting users trade latency against depth of reasoning based on task complexity. o3 supports core developer tooling, including function calling, structured outputs, and system-level messages. With built-in vision capabilities, o3 can interpret and analyze images, making it suitable for multimodal applications. It is available through the Chat Completions, Assistants, and Batch APIs for flexible integration into enterprise and research workflows.
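As a minimal sketch of how the effort levels are selected in practice, the snippet below builds a Chat Completions request payload that sets the `reasoning_effort` field for o3. The prompt text and the choice of `"high"` are illustrative; no request is actually sent.

```python
import json

def build_o3_request(prompt: str, effort: str = "medium") -> dict:
    """Build a Chat Completions payload that sets o3's reasoning effort."""
    if effort not in ("low", "medium", "high"):
        raise ValueError("reasoning effort must be 'low', 'medium', or 'high'")
    return {
        "model": "o3",
        "reasoning_effort": effort,  # trades latency for depth of reasoning
        "messages": [{"role": "user", "content": prompt}],
    }

# Illustrative use: a hard math question warrants high effort.
payload = build_o3_request("Prove that sqrt(2) is irrational.", effort="high")
print(json.dumps(payload, indent=2))
```

The same payload shape works for the Batch API, where each line of the batch file carries one such request body.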

Qwen2.5-VL-32B

Over the past five months since the release of Qwen2-VL, developers have built new models based on it, contributing valuable feedback. Now, Qwen2.5-VL introduces enhanced capabilities, including precise analysis of images, text, and charts, as well as object localization with structured JSON outputs. It understands long videos, identifies key events, and functions as an agent, interacting with tools on computers and phones. The model's architecture features dynamic video processing and an optimized ViT encoder for improved speed and accuracy.
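The object-localization-with-JSON workflow described above can be sketched as a vision request plus a parse step. This is a hedged example, assuming an OpenAI-compatible chat endpoint of the kind many Qwen hosts expose; the model name, image URL, and the sample reply are illustrative, and `bbox_2d` follows the key name Qwen2.5-VL uses in its grounding outputs.

```python
import json

def build_localization_request(image_url: str, target: str) -> dict:
    """Ask Qwen2.5-VL to localize objects and answer in structured JSON."""
    prompt = (
        f"Detect every {target} in the image and reply with JSON only: "
        '[{"label": str, "bbox_2d": [x1, y1, x2, y2]}]'
    )
    return {
        "model": "Qwen2.5-VL-32B-Instruct",  # assumed hosted model name
        "messages": [{
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": image_url}},
                {"type": "text", "text": prompt},
            ],
        }],
    }

req = build_localization_request("https://example.com/photo.jpg", "dog")

# Parsing a hypothetical reply in the requested schema:
reply = '[{"label": "dog", "bbox_2d": [34, 120, 310, 480]}]'
boxes = json.loads(reply)
print(boxes[0]["label"], boxes[0]["bbox_2d"])
```

Requesting the schema in the prompt keeps the output machine-parseable, which is what makes the model usable as an agent driving downstream tools.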

| | o3 | Qwen2.5-VL-32B |
| --- | --- | --- |
| Provider | OpenAI | Alibaba (Qwen) |
| Release Date | Apr 16, 2025 | Mar 25, 2025 |
| Modalities | text, images | text, images, video |
| API Providers | OpenAI API | - |
| Knowledge Cut-off Date | - | Unknown |
| Open Source | No | Yes |
| Pricing (Input) | $10.00 per million tokens | $0 |
| Pricing (Output) | $40.00 per million tokens | $0 |
| MMLU | 82.9% | 78.4% |
| MMLU Pro | - | 49.5% |
| MMMU | - | 70% |
| HellaSwag | - | Not available |
| HumanEval | - | Not available |
| MATH | - | 82.2% |
| GPQA | 83.3% (Diamond, no tools) | 46.0% (Diamond) |
| IFEval | - | Not available |
| AIME 2024 | 91.6% | - |
| AIME 2025 | 88.9% | - |
