Grok 3 Beta

Grok 3 is xAI's most advanced model, trained on the Colossus supercluster with 10 times the compute of previous state-of-the-art models. It offers a 1M-token context window and advanced reasoning capabilities honed through large-scale reinforcement learning, allowing it to think for anywhere from seconds to minutes when working through complex problems. The model achieves top-tier results on academic benchmarks and in real-world user evaluations, earning an Elo score of 1402 in Chatbot Arena. It was released alongside Grok 3 Mini, a cost-efficient variant optimized for streamlined reasoning.
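
A minimal sketch of how Grok 3 might be queried through xAI's OpenAI-compatible chat completions API is shown below; the `https://api.x.ai/v1` base URL and the `grok-3-beta` model identifier are assumptions and should be checked against xAI's current API documentation.

```python
# Minimal sketch: querying Grok 3 via xAI's OpenAI-compatible API.
# Assumptions: base URL https://api.x.ai/v1 and model id "grok-3-beta";
# check the xAI console for the exact identifiers available to you.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["XAI_API_KEY"],  # xAI API key, not an OpenAI key
    base_url="https://api.x.ai/v1",
)

response = client.chat.completions.create(
    model="grok-3-beta",
    messages=[
        {"role": "system", "content": "You are a concise research assistant."},
        {"role": "user", "content": "Summarize the key trade-offs of long-context models."},
    ],
)

print(response.choices[0].message.content)
```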

Claude 3.5 Haiku

Claude 3.5 Haiku, developed by Anthropic, offers a 200,000-token context window. Pricing is set at $1 per million input tokens and $5 per million output tokens, with potential savings of up to 90% through prompt caching and 50% via the Message Batches API. Released on November 4, 2024, this model excels in code completion, interactive chatbots, data extraction and labeling, as well as real-time content moderation.
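
The sketch below shows one way a cached-prompt request to Claude 3.5 Haiku might look with the Anthropic Python SDK; the model identifier `claude-3-5-haiku-20241022` and the `cache_control` prompt-caching block are assumptions based on Anthropic's published API documentation and should be verified against the current docs.

```python
# Minimal sketch: calling Claude 3.5 Haiku with the Anthropic Python SDK.
# The model id "claude-3-5-haiku-20241022" and the cache_control usage are
# assumptions; verify against Anthropic's current model list and
# prompt-caching documentation.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

long_reference_text = "..."  # large, reusable context you want cached

message = client.messages.create(
    model="claude-3-5-haiku-20241022",
    max_tokens=512,
    system=[
        {
            "type": "text",
            "text": long_reference_text,
            # Marks this block as cacheable so repeated calls can reuse it
            # at the discounted cached-input rate.
            "cache_control": {"type": "ephemeral"},
        }
    ],
    messages=[{"role": "user", "content": "Extract all dates mentioned in the reference text."}],
)

print(message.content[0].text)
```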

| | Grok 3 Beta | Claude 3.5 Haiku |
|---|---|---|
| Provider | xAI | Anthropic |
| Release Date | February 2025 | November 4, 2024 |
| Modalities | text, images, video | text |
| API Providers | xAI | Anthropic, AWS Bedrock, Vertex AI |
| Knowledge Cut-off Date | 2025-01 | 01.04.2024 |
| Open Source | No | No |
| Pricing (Input) | Not available | $0.80 per million tokens |
| Pricing (Output) | Not available | $4.00 per million tokens |
| MMLU | Not available | Not available |
| MMLU-Pro | 79.9% (base model) | 65% (0-shot CoT) |
| MMMU | 78% (Think mode) | Not available |
| HellaSwag | Not available | Not available |
| HumanEval | Not available | 88.1% (0-shot) |
| MATH | Not available | 69.4% (0-shot CoT) |
| GPQA | 84.6% (Diamond, Think mode) | Not available |
| IFEval | Not available | Not available |
| SimpleQA | – | – |
| AIME 2024 | – | – |
| AIME 2025 | – | – |
| Aider Polyglot | – | – |
| LiveCodeBench v5 | – | – |
| Global MMLU (Lite) | – | – |
| MathVista | – | – |
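
As a quick illustration of the pricing rows above, the following sketch converts the listed per-million-token rates into an estimated cost for a single request; the token counts are made-up example values.

```python
# Rough cost estimate from the per-million-token rates listed above
# (Claude 3.5 Haiku: $0.80 input / $4.00 output per million tokens).
# The token counts below are illustrative, not measurements.
INPUT_RATE_PER_MTOK = 0.80
OUTPUT_RATE_PER_MTOK = 4.00

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in USD for one request."""
    return (
        input_tokens / 1_000_000 * INPUT_RATE_PER_MTOK
        + output_tokens / 1_000_000 * OUTPUT_RATE_PER_MTOK
    )

# Example: a 2,000-token prompt with a 500-token reply.
print(f"${request_cost(2_000, 500):.6f}")  # -> $0.003600
```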