GPT-4.1

GPT-4.1, launched by OpenAI on April 14, 2025, introduces a 1 million token context window and supports outputs of up to 32,768 tokens per request. It performs strongly on coding tasks, scoring 54.6% on the SWE-Bench Verified benchmark, and outscores GPT-4o by 10.5 points on the MultiChallenge instruction-following benchmark. The model's knowledge cutoff is June 2024. Pricing is $2.00 per million input tokens and $8.00 per million output tokens, with a 75% discount on cached input tokens, which makes repeated queries over the same context substantially cheaper.
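To make the cached-input discount concrete, here is a minimal cost sketch based on the rates above. The token counts in the example are hypothetical, and exact cached-input billing mechanics may differ from this simplification.

```python
# Rough per-request cost estimate for GPT-4.1 using the listed rates.
# The 25% multiplier reflects the 75% discount on cached input tokens.
INPUT_RATE = 2.00 / 1_000_000          # dollars per input token
OUTPUT_RATE = 8.00 / 1_000_000         # dollars per output token
CACHED_INPUT_RATE = INPUT_RATE * 0.25  # discounted rate for cached input tokens


def request_cost(input_tokens: int, cached_input_tokens: int, output_tokens: int) -> float:
    """Estimate the dollar cost of a single request."""
    fresh_input = input_tokens - cached_input_tokens
    return (fresh_input * INPUT_RATE
            + cached_input_tokens * CACHED_INPUT_RATE
            + output_tokens * OUTPUT_RATE)


# Hypothetical example: 100k-token prompt, 80k of it served from the cache, 2k output.
print(f"${request_cost(100_000, 80_000, 2_000):.4f}")  # -> $0.0960
```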

Llama 3.1 Nemotron 70B Instruct

NVIDIA's Llama 3.1 Nemotron 70B is a powerful language model optimized for delivering accurate and informative responses. Built on the Llama 3.1 70B architecture and enhanced with Reinforcement Learning from Human Feedback (RLHF), it achieves top performance on automatic alignment benchmarks. Designed for applications demanding high precision and helpfulness in response generation, this model is well suited to a wide range of user queries across multiple domains.

|                        | GPT-4.1                  | Llama 3.1 Nemotron 70B Instruct |
|------------------------|--------------------------|---------------------------------|
| Web Site               | -                        | -                               |
| Provider               | OpenAI                   | NVIDIA                          |
| Chat                   | -                        | -                               |
| Release Date           | April 14, 2025           | -                               |
| Modalities             | text, images             | text                            |
| API Providers          | OpenAI API               | OpenRouter                      |
| Knowledge Cut-off Date | June 2024                | -                               |
| Open Source            | No                       | Yes                             |
| Pricing (Input)        | $2.00 per million tokens | $0.35 per million tokens        |
| Pricing (Output)       | $8.00 per million tokens | $0.40 per million tokens        |
| MMLU                   | 90.2% (pass@1)           | 85% (5-shot)                    |
| MMLU-Pro               | -                        | -                               |
| MMMU                   | 74.8%                    | -                               |
| HellaSwag              | -                        | -                               |
| HumanEval              | -                        | 75%                             |
| MATH                   | -                        | 71%                             |
| GPQA (Diamond)         | 66.3%                    | -                               |
| IFEval                 | -                        | -                               |
| SimpleQA               | -                        | -                               |
| AIME 2024              | 48.1%                    | -                               |
| AIME 2025              | -                        | -                               |
| Aider Polyglot         | -                        | -                               |
| LiveCodeBench v5       | -                        | -                               |
| Global MMLU (Lite)     | 87.3% (pass@1)           | -                               |
| MathVista              | -                        | -                               |
| Mobile Application     | -                        | -                               |
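Both models are reachable through OpenAI-compatible endpoints: GPT-4.1 via the OpenAI API and Nemotron via OpenRouter. The sketch below shows how a call to each might look; the model identifiers, the OpenRouter base URL, and the environment-variable names are assumptions to check against the providers' documentation.

```python
# Sketch: one request to each model via its listed API provider.
# Assumes the OpenAI Python SDK (v1+) and API keys in the environment.
import os
from openai import OpenAI

prompt = [{"role": "user", "content": "Summarize the trade-offs between these two models."}]

# GPT-4.1 via the OpenAI API (model id assumed to be "gpt-4.1").
openai_client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
gpt_reply = openai_client.chat.completions.create(model="gpt-4.1", messages=prompt)
print(gpt_reply.choices[0].message.content)

# Llama 3.1 Nemotron 70B Instruct via OpenRouter's OpenAI-compatible endpoint
# (model id assumed to be "nvidia/llama-3.1-nemotron-70b-instruct").
openrouter_client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)
nemotron_reply = openrouter_client.chat.completions.create(
    model="nvidia/llama-3.1-nemotron-70b-instruct",
    messages=prompt,
)
print(nemotron_reply.choices[0].message.content)
```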
