Claude 3.5 Haiku

Claude 3.5 Haiku, developed by Anthropic, offers a 200,000-token context window. Pricing is set at $0.80 per million input tokens and $4.00 per million output tokens, with potential savings of up to 90% through prompt caching and 50% via the Message Batches API. Released on November 4, 2024, the model excels at code completion, interactive chatbots, data extraction and labeling, and real-time content moderation.
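
The prompt-caching discount is applied per request by marking a reusable prompt prefix. Below is a minimal sketch using the official anthropic Python SDK, assuming an ANTHROPIC_API_KEY environment variable is set; the moderation policy and post text are placeholders, and the cached prefix must meet the model's minimum cacheable length before it is actually stored.

```python
import anthropic

# Assumes the `anthropic` Python SDK is installed and ANTHROPIC_API_KEY is set.
client = anthropic.Anthropic()

MODERATION_POLICY = "..."  # placeholder: a long, reusable system prompt worth caching

response = client.messages.create(
    model="claude-3-5-haiku-20241022",
    max_tokens=512,
    system=[
        {
            "type": "text",
            "text": MODERATION_POLICY,
            # Marks the prompt prefix for caching; later calls that reuse this
            # prefix read it from the cache at a reduced input-token rate.
            "cache_control": {"type": "ephemeral"},
        }
    ],
    messages=[
        {"role": "user", "content": "Classify this post against the policy: <post text>"}
    ],
)

print(response.content[0].text)
```

Subsequent requests that reuse the same system prefix are billed at the lower cache-read rate, which is where the savings quoted above come from.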

Llama 3.3 70B Instruct

Llama 3.3 70B Instruct, created by Meta, is a multilingual large language model specifically fine-tuned for instruction-based tasks and optimized for conversational applications. It is capable of processing and generating text in multiple languages, with a context window supporting up to 128,000 tokens. Launched on December 6, 2024, the model surpasses numerous open-source and proprietary chat models in various industry benchmarks. It utilizes Grouped-Query Attention (GQA) to improve scalability and has been trained on a diverse dataset comprising over 15 trillion tokens from publicly available sources. The model's knowledge is current up to December 2023.
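
As a concrete illustration, the instruction-tuned checkpoint can be driven through the Hugging Face transformers chat pipeline. The sketch below assumes access to the gated meta-llama/Llama-3.3-70B-Instruct repository and hardware able to host a 70B model (roughly 140 GB in bf16, less with quantization); the prompt text is a placeholder. The hosted providers listed in the table below typically expose the same model behind OpenAI-compatible endpoints if running it locally is impractical.

```python
import torch
from transformers import pipeline

# Assumes access to the gated meta-llama repository on Hugging Face and
# enough GPU memory for a 70B-parameter model.
generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-3.3-70B-Instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a concise multilingual assistant."},
    {"role": "user", "content": "Summarize this paragraph in French: <text>"},
]

outputs = generator(messages, max_new_tokens=256)
# The pipeline returns the full chat; the last message is the model's reply.
print(outputs[0]["generated_text"][-1]["content"])
```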

| | Claude 3.5 Haiku | Llama 3.3 70B Instruct |
| --- | --- | --- |
| Provider | Anthropic | Meta |
| Release Date | November 4, 2024 | December 6, 2024 |
| Modalities | Text | Text |
| API Providers | Anthropic, AWS Bedrock, Vertex AI | Fireworks, Together, DeepInfra, Hyperbolic |
| Knowledge Cut-off Date | April 2024 | December 2023 |
| Open Source | No | Yes |
| Pricing (Input) | $0.80 per million tokens | $0.23 per million tokens |
| Pricing (Output) | $4.00 per million tokens | $0.40 per million tokens |
| MMLU | Not available | 86% (0-shot, CoT) |
| MMLU-Pro | 65% (0-shot, CoT) | 68.9% (5-shot, CoT) |
| MMMU | Not available | Not available |
| HellaSwag | Not available | Not available |
| HumanEval | 88.1% (0-shot) | 88.4% (pass@1) |
| MATH | 69.4% (0-shot, CoT) | 77% (0-shot, CoT) |
| GPQA | Not available | 50.5% (0-shot, CoT) |
| IFEval | Not available | 92.1% |

No results are reported for either model on SimpleQA, AIME 2024, AIME 2025, Aider Polyglot, LiveCodeBench v5, Global MMLU (Lite), or MathVista.
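
The pricing rows translate directly into per-request costs. The following sketch uses the list prices from the table above; the workload size is an arbitrary example, and discounts such as prompt caching or batching are ignored.

```python
# Per-million-token list prices from the comparison table above (USD).
PRICES = {
    "Claude 3.5 Haiku":       {"input": 0.80, "output": 4.00},
    "Llama 3.3 70B Instruct": {"input": 0.23, "output": 0.40},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for a single request at list price (no caching or batch discounts)."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Hypothetical workload: 1,000 requests, each with 2,000 input and 500 output tokens.
for model in PRICES:
    total = 1_000 * request_cost(model, input_tokens=2_000, output_tokens=500)
    print(f"{model}: ${total:.2f}")
```

Under those assumptions, 1,000 requests of 2,000 input and 500 output tokens each come to roughly $3.60 on Claude 3.5 Haiku versus $0.66 on Llama 3.3 70B Instruct.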
