Grok 3 Beta

Grok 3 is xAI's most advanced model, trained on the Colossus supercluster with ten times the compute of previous state-of-the-art models. It offers a 1M-token context window and reasoning capabilities refined through large-scale reinforcement learning, allowing it to think for anywhere from seconds to minutes when working through complex problems. The model achieves top-tier results on academic benchmarks and in real-world user evaluations, reaching an Elo score of 1402 in the Chatbot Arena. It was released alongside Grok 3 Mini, a cost-efficient variant optimized for lighter-weight reasoning.
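
Since the comparison table below lists xAI as the API provider, here is a minimal sketch of querying Grok 3 programmatically, assuming xAI's OpenAI-compatible chat-completions endpoint; the model identifier, base URL, and environment variable name are assumptions to verify against xAI's current documentation.

```python
# Minimal sketch: querying Grok 3 through xAI's OpenAI-compatible API.
# The model identifier ("grok-3-beta"), base URL, and env var name are
# assumptions; check xAI's API documentation for the exact values.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["XAI_API_KEY"],  # assumed environment variable name
    base_url="https://api.x.ai/v1",     # assumed OpenAI-compatible endpoint
)

response = client.chat.completions.create(
    model="grok-3-beta",  # assumed model identifier
    messages=[
        {"role": "system", "content": "You are a concise research assistant."},
        {"role": "user", "content": "Outline a proof that sqrt(2) is irrational."},
    ],
)

print(response.choices[0].message.content)
```

Because the endpoint follows the OpenAI chat-completions format, existing OpenAI-based tooling can usually be pointed at it by changing only the base URL and API key.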

Qwen2.5-VL-32B

In the five months since Qwen2-VL's release, developers have built new models on top of it and contributed valuable feedback. Qwen2.5-VL builds on that work with enhanced capabilities, including precise analysis of images, text, and charts, and object localization with structured JSON outputs. It understands long videos, identifies key events within them, and can act as an agent that operates tools on computers and phones. Architecturally, it features dynamic-resolution video processing and an optimized ViT encoder for improved speed and accuracy; the 32B variant is released with open weights.
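
As a rough illustration of the object-localization and structured-JSON behavior described above, the sketch below runs the model through Hugging Face Transformers. It assumes a recent Transformers release with Qwen2.5-VL support plus the qwen-vl-utils helper package; the repository name, image URL, and prompt are illustrative.

```python
# Minimal sketch: image grounding with Qwen2.5-VL via Hugging Face Transformers.
# Assumes a recent transformers release with Qwen2.5-VL support and the
# qwen-vl-utils helper package; model ID, image URL, and prompt are illustrative.
import torch
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

model_id = "Qwen/Qwen2.5-VL-32B-Instruct"  # assumed Hugging Face repo name
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

messages = [{
    "role": "user",
    "content": [
        {"type": "image", "image": "https://example.com/street_scene.jpg"},  # placeholder URL
        {"type": "text", "text": "Find every traffic sign and return its bounding box as JSON."},
    ],
}]

# Build the chat prompt and collect the referenced images/videos.
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text], images=image_inputs, videos=video_inputs,
    padding=True, return_tensors="pt",
).to(model.device)

# Generate, then strip the prompt tokens before decoding the answer.
output_ids = model.generate(**inputs, max_new_tokens=512)
trimmed = output_ids[:, inputs.input_ids.shape[1]:]
print(processor.batch_decode(trimmed, skip_special_tokens=True)[0])
```

The 32B checkpoint needs substantial GPU memory; the smaller Qwen2.5-VL variants expose the same interface.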

                        Grok 3 Beta                             Qwen2.5-VL-32B
Web Site
Provider
Chat
Release Date
Modalities              text, images, video                     text, images, video
API Providers           xAI                                     -
Knowledge Cut-off Date  2025-01                                 Unknown
Open Source             No                                      Yes (Source)
Pricing (Input)         Not available                           $0
Pricing (Output)        Not available                           $0
MMLU                    Not available                           78.4% (Source)
MMLU-Pro                79.9% (base model) (Source)             49.5%
MMMU                    78% (Think mode) (Source)               70%
HellaSwag               Not available                           Not available
HumanEval               Not available                           Not available
MATH                    Not available                           82.2%
GPQA                    84.6% (Think mode, Diamond) (Source)    46.0% (Diamond)
IFEval                  Not available                           Not available
SimpleQA                -                                       -
AIME 2024               -                                       -
AIME 2025               -                                       -
Aider Polyglot          -                                       -
LiveCodeBench v5        -                                       -
Global MMLU (Lite)      -                                       -
MathVista               -                                       -
Mobile Application      -
