GPT-4.1

GPT-4.1, launched by OpenAI on April 14, 2025, introduces a 1 million token context window and supports outputs of up to 32,768 tokens per request. It delivers strong performance on coding tasks, scoring 54.6% on the SWE-Bench Verified benchmark, and improves on GPT-4o by 10.5 percentage points on the MultiChallenge benchmark for instruction following. The model's knowledge cutoff is June 2024. Pricing is $2.00 per million input tokens and $8.00 per million output tokens, with a 75% discount on cached input tokens, making it highly cost-efficient for repeated queries that share a prompt prefix.
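To make the caching discount concrete, the sketch below estimates the cost of a single request from the published rates above. The token counts are made-up example values, not measurements.

```python
# Back-of-the-envelope cost estimate for one GPT-4.1 request, using the
# published rates above. Token counts below are illustrative only.

INPUT_PER_M = 2.00          # USD per 1M uncached input tokens
CACHED_INPUT_PER_M = 0.50   # 75% discount applied to cached input tokens
OUTPUT_PER_M = 8.00         # USD per 1M output tokens

def request_cost(uncached_in: int, cached_in: int, out: int) -> float:
    """Return the USD cost of one request given its token counts."""
    return (
        uncached_in / 1_000_000 * INPUT_PER_M
        + cached_in / 1_000_000 * CACHED_INPUT_PER_M
        + out / 1_000_000 * OUTPUT_PER_M
    )

# Example: a 50k-token prompt of which 40k tokens hit the prompt cache,
# plus a 1k-token response.
print(f"${request_cost(10_000, 40_000, 1_000):.4f}")  # -> $0.0480
```

Without caching, the same 50k-token prompt would cost $0.100 in input tokens alone, so the discount dominates for workloads that repeatedly reuse a long shared prefix.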

Llama 3.3 70B Instruct

Llama 3.3 70B Instruct, created by Meta, is a multilingual large language model specifically fine-tuned for instruction-based tasks and optimized for conversational applications. It is capable of processing and generating text in multiple languages, with a context window supporting up to 128,000 tokens. Launched on December 6, 2024, the model surpasses numerous open-source and proprietary chat models in various industry benchmarks. It utilizes Grouped-Query Attention (GQA) to improve scalability and has been trained on a diverse dataset comprising over 15 trillion tokens from publicly available sources. The model's knowledge is current up to December 2023.
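The scalability benefit of Grouped-Query Attention comes from several query heads sharing a single key/value head, which shrinks the KV cache at inference time. The PyTorch sketch below is a minimal illustration of that idea only; the head counts and dimensions are invented for clarity (no causal mask, RoPE, or projection layers), and they are not Llama 3.3's actual configuration.

```python
import torch
import torch.nn.functional as F

def grouped_query_attention(q, k, v, n_kv_heads):
    """Minimal GQA: each group of query heads shares one KV head.

    q: (batch, n_q_heads, seq, head_dim)
    k, v: (batch, n_kv_heads, seq, head_dim)
    """
    b, n_q_heads, s, d = q.shape
    group = n_q_heads // n_kv_heads            # query heads per shared KV head
    k = k.repeat_interleave(group, dim=1)      # broadcast shared K heads
    v = v.repeat_interleave(group, dim=1)      # broadcast shared V heads
    scores = q @ k.transpose(-2, -1) / d ** 0.5   # scaled dot-product attention
    return F.softmax(scores, dim=-1) @ v

# Illustrative sizes only: 8 query heads share 2 KV heads, so the KV cache
# is 4x smaller than it would be under full multi-head attention.
q = torch.randn(1, 8, 16, 64)
k = torch.randn(1, 2, 16, 64)
v = torch.randn(1, 2, 16, 64)
print(grouped_query_attention(q, k, v, n_kv_heads=2).shape)  # (1, 8, 16, 64)
```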

|                        | GPT-4.1                  | Llama 3.3 70B Instruct                     |
|------------------------|--------------------------|--------------------------------------------|
| Provider               | OpenAI                   | Meta                                       |
| Release Date           | Apr 14, 2025             | Dec 06, 2024                               |
| Modalities             | text, images             | text                                       |
| API Providers          | OpenAI API               | Fireworks, Together, DeepInfra, Hyperbolic |
| Knowledge Cut-off Date | June 2024                | December 2023                              |
| Open Source            | No                       | Yes                                        |
| Pricing (Input)        | $2.00 per million tokens | $0.23 per million tokens                   |
| Pricing (Output)       | $8.00 per million tokens | $0.40 per million tokens                   |
| MMLU                   | 90.2% (pass@1)           | 86% (0-shot, CoT)                          |
| MMLU Pro               | -                        | 68.9% (5-shot, CoT)                        |
| MMMU                   | 74.8%                    | Not available                              |
| HellaSwag              | -                        | Not available                              |
| HumanEval              | -                        | 88.4% (pass@1)                             |
| MATH                   | -                        | 77% (0-shot, CoT)                          |
| GPQA                   | 66.3% (Diamond)          | 50.5% (0-shot, CoT)                        |
| IFEval                 | -                        | 92.1%                                      |
| Mobile Application     | -                        | -                                          |
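For the API providers listed above, a minimal GPT-4.1 request through the official OpenAI API can be sketched as follows; the prompt is illustrative, and the `openai` Python package plus an `OPENAI_API_KEY` environment variable are assumed. The Llama 3.3 hosts listed (Fireworks, Together, DeepInfra, Hyperbolic) generally expose OpenAI-compatible endpoints, so the same client pattern applies with a different base URL and model name.

```python
from openai import OpenAI  # pip install openai; assumes OPENAI_API_KEY is set

client = OpenAI()

# Minimal chat completion against GPT-4.1; the prompt is illustrative.
response = client.chat.completions.create(
    model="gpt-4.1",
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize Grouped-Query Attention in two sentences."},
    ],
)
print(response.choices[0].message.content)
```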
