o4-mini

OpenAI o4-mini is the newest lightweight model in the o-series, engineered for efficient and capable reasoning across text and visual tasks. Optimized for speed and performance, it excels in code generation and image-based understanding, while maintaining a balance between latency and reasoning depth. The model supports a 200,000-token context window with up to 100,000 output tokens, making it suitable for extended, high-volume interactions. It handles both text and image inputs, producing textual outputs with advanced reasoning capabilities. With its compact architecture and versatile performance, o4-mini is ideal for a wide array of real-world applications demanding fast, cost-effective intelligence.
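The advertised limits (200,000-token context window, up to 100,000 output tokens) can be sketched as a simple budget check. This is a minimal illustration, not OpenAI's API; the token counts are assumed inputs, and measuring them in practice would require a tokenizer such as tiktoken.

```python
# Sketch: checking that a request fits o4-mini's advertised limits
# (200,000-token context window, up to 100,000 output tokens).
CONTEXT_WINDOW = 200_000
MAX_OUTPUT_TOKENS = 100_000

def fits_in_context(prompt_tokens: int, max_output_tokens: int) -> bool:
    """Return True if the prompt plus requested output fit the model limits."""
    if max_output_tokens > MAX_OUTPUT_TOKENS:
        return False
    return prompt_tokens + max_output_tokens <= CONTEXT_WINDOW

# A 150k-token prompt with 40k requested output tokens fits;
# raising the output budget to 60k would exceed the 200k window.
print(fits_in_context(150_000, 40_000))  # True
print(fits_in_context(150_000, 60_000))  # False
```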

Mistral Large 2

Mistral Large 2, developed by Mistral AI, offers a 128K-token context window and is priced at $3.00 per million input tokens and $9.00 per million output tokens. Released on July 24, 2024, the model scored 84.0 on the MMLU benchmark in a 5-shot evaluation, demonstrating strong performance across diverse tasks.
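Per-million-token pricing translates directly into a per-request cost estimate. The sketch below uses the Mistral Large 2 rates quoted above; the token counts in the example are hypothetical.

```python
# Sketch: estimating request cost from Mistral Large 2's listed rates
# ($3.00 per million input tokens, $9.00 per million output tokens).
INPUT_PRICE_PER_M = 3.00
OUTPUT_PRICE_PER_M = 9.00

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for one request at the listed per-million-token rates."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Example: 120k input tokens and 8k output tokens.
print(round(request_cost(120_000, 8_000), 3))  # 0.432
```

The same formula applies to o4-mini by swapping in its $1.10 input and $4.40 output rates.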

| | o4-mini | Mistral Large 2 |
|---|---|---|
| Provider | OpenAI | Mistral AI |
| Release Date | April 16, 2025 | July 24, 2024 |
| Input Modalities | text, images | text |
| Output Modalities | text | text |
| API Providers | OpenAI API | Azure AI, AWS Bedrock, Google AI Studio, Vertex AI, Snowflake Cortex |
| Knowledge Cut-off Date | - | Unknown |
| Open Source | No | Yes |
| Pricing (Input) | $1.10 per million tokens | $3.00 per million tokens |
| Pricing (Output) | $4.40 per million tokens | $9.00 per million tokens |
| MMLU | - | 84% (5-shot) |
| MMLU-Pro | - | 50.69% |
| MMMU | 81.6% | Not available |
| HellaSwag | - | Not available |
| HumanEval | 14.28% | Not available |
| MATH | - | 1.13% |
| GPQA | 81.4% | 24.94% |
| IFEval | - | 84.01% |
| SimpleQA | - | - |
| AIME 2024 | 93.4% | - |
| AIME 2025 | 92.7% | - |
| Aider Polyglot | - | - |
| LiveCodeBench v5 | - | - |
| Global MMLU (Lite) | - | - |
| MathVista | - | - |
