GPT-4.1 Nano

GPT-4.1 Nano, launched by OpenAI on April 14, 2025, is the company's fastest and most affordable model to date. Designed for low-latency tasks such as classification and autocomplete, it combines a compact architecture with robust capabilities. Despite its size, it supports a 1 million token context window and delivers strong benchmark results, achieving 80.1% on MMLU and 50.3% on GPQA. With a knowledge cutoff of June 2024, GPT-4.1 Nano offers exceptional value at $0.10 per million input tokens and $0.40 per million output tokens, with a 75% discount applied to cached inputs, making it well suited to high-volume, cost-sensitive deployments.
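
To illustrate the kind of low-latency classification call described above, here is a minimal sketch using the OpenAI Python SDK. The model identifier "gpt-4.1-nano", the prompts, and the parameter values are illustrative assumptions rather than details taken from this page.

```python
# Minimal sentiment-classification sketch against GPT-4.1 Nano.
# Assumes the openai package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4.1-nano",  # assumed model identifier for GPT-4.1 Nano
    messages=[
        {
            "role": "system",
            "content": "Classify the sentiment of the user message as "
                       "positive, negative, or neutral. Reply with one word.",
        },
        {"role": "user", "content": "The checkout flow was quick and painless."},
    ],
    temperature=0,  # deterministic output for a labeling task
    max_tokens=3,   # a one-word label needs very few output tokens
)

print(response.choices[0].message.content)
```

Keeping the completion to a few tokens means most of the per-request cost comes from input tokens, which is where the cached-input discount applies.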

Mistral Large 2

Mistral Large 2, developed by Mistral, offers a 128K-token context window and is priced at $3.00 per million input tokens and $9.00 per million output tokens. Released on July 24, 2024, the model scored 84.0% on the MMLU benchmark in a 5-shot evaluation, demonstrating strong performance across diverse tasks.
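
A comparable request to Mistral Large 2 can be sketched against Mistral's chat-completions endpoint. The endpoint URL, the "mistral-large-latest" model alias, and the environment variable name are assumptions for illustration, not details taken from this page.

```python
# Minimal chat request to Mistral Large 2 over HTTPS.
# Assumes the requests package is installed and MISTRAL_API_KEY is set.
import os

import requests

resp = requests.post(
    "https://api.mistral.ai/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"},
    json={
        "model": "mistral-large-latest",  # assumed alias for Mistral Large 2
        "messages": [
            {
                "role": "user",
                "content": "List three factors to weigh when choosing "
                           "between a small, fast model and a large one.",
            }
        ],
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```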

GPT-4.1 Nano vs. Mistral Large 2

| | GPT-4.1 Nano | Mistral Large 2 |
|---|---|---|
| Web Site | - | - |
| Provider | OpenAI | Mistral |
| Chat | - | - |
| Release Date | April 14, 2025 | July 24, 2024 |
| Modalities | text, images | text |
| API Providers | OpenAI API | Azure AI, AWS Bedrock, Google AI Studio, Vertex AI, Snowflake Cortex |
| Knowledge Cut-off Date | June 2024 | Unknown |
| Open Source | No | Yes |
| Pricing (Input) | $0.10 per million tokens | $3.00 per million tokens |
| Pricing (Output) | $0.40 per million tokens | $9.00 per million tokens |
| MMLU | 80.1% | 84% (5-shot) |
| MMLU-Pro | - | 50.69% |
| MMMU | 55.4% | Not available |
| HellaSwag | - | Not available |
| HumanEval | - | Not available |
| MATH | - | 1.13% |
| GPQA | 50.3% (Diamond) | 24.94% |
| IFEval | 74.5% | 84.01% |
| SimpleQA | - | - |
| AIME 2024 | 29.4% | - |
| AIME 2025 | - | - |
| Aider Polyglot | - | - |
| LiveCodeBench v5 | - | - |
| Global MMLU (Lite) | 66.9% | - |
| MathVista | 56.2% (Image Reasoning) | - |
| Mobile Application | - | - |
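
The pricing rows above translate into very different bills for the same workload. The sketch below estimates a daily cost for an assumed traffic profile; the request count, token counts, and cache hit rate are illustrative assumptions, and the 75% cached-input discount is the one mentioned for GPT-4.1 Nano above.

```python
# Back-of-the-envelope daily cost estimate from the per-million-token
# prices in the table above. The workload figures are illustrative.
PRICES = {  # USD per million tokens: (input, output)
    "GPT-4.1 Nano": (0.10, 0.40),
    "Mistral Large 2": (3.00, 9.00),
}

requests_per_day = 100_000
input_tokens_per_request = 800
output_tokens_per_request = 200
cached_input_fraction = 0.5   # assumed share of input tokens served from cache
CACHE_DISCOUNT = 0.75         # GPT-4.1 Nano cached inputs billed at a 75% discount

for model, (in_price, out_price) in PRICES.items():
    input_cost = requests_per_day * input_tokens_per_request * in_price / 1e6
    if model == "GPT-4.1 Nano":
        # Cached share billed at 25% of the input price, the rest at full price.
        input_cost *= (1 - cached_input_fraction) + cached_input_fraction * (1 - CACHE_DISCOUNT)
    output_cost = requests_per_day * output_tokens_per_request * out_price / 1e6
    print(f"{model}: ${input_cost + output_cost:,.2f} per day")
```

With these assumptions the run prints roughly $13 per day for GPT-4.1 Nano and $420 per day for Mistral Large 2, which is the gap the pricing rows imply.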
