Qwen2.5-VL-32B

Over the past five months since the release of Qwen2-VL, developers have built new models on top of it and contributed valuable feedback. Qwen2.5-VL now introduces enhanced capabilities, including precise analysis of images, text, and charts, as well as object localization with structured JSON outputs. It understands long videos, identifies key events, and functions as an agent, interacting with tools on computers and phones. The architecture adds dynamic-resolution and frame-rate video processing and a streamlined ViT encoder for improved speed and accuracy.
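The structured-output capability can be tried directly against the open weights. Below is a minimal sketch, assuming the Hugging Face transformers integration (the Qwen2_5_VLForConditionalGeneration class) and the qwen-vl-utils helper package; the image path, prompt wording, and expected JSON shape are illustrative, not taken from the model card.

```python
# Sketch: asking Qwen2.5-VL-32B to localize objects and return JSON bounding boxes.
# Assumes a recent transformers release with Qwen2_5_VLForConditionalGeneration
# and the qwen-vl-utils helper package; prompt and paths are illustrative.
import json

from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration
from qwen_vl_utils import process_vision_info

model_id = "Qwen/Qwen2.5-VL-32B-Instruct"
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

messages = [{
    "role": "user",
    "content": [
        {"type": "image", "image": "file:///path/to/street_scene.jpg"},
        {"type": "text",
         "text": "Locate every car in the image. Output a JSON list of "
                 "objects with keys \"bbox_2d\" ([x1, y1, x2, y2]) and \"label\"."},
    ],
}]

# Build model inputs: chat template for the text, pixel values for the image.
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text], images=image_inputs, videos=video_inputs,
    padding=True, return_tensors="pt"
).to(model.device)

# Generate and keep only the newly produced tokens.
output_ids = model.generate(**inputs, max_new_tokens=512)
new_ids = output_ids[:, inputs.input_ids.shape[1]:]
raw = processor.batch_decode(new_ids, skip_special_tokens=True)[0]

# In this sketch the reply is assumed to be plain JSON, e.g.
# [{"bbox_2d": [84, 212, 355, 480], "label": "car"}, ...]
boxes = json.loads(raw)
print(boxes)
```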

Command A

Command A is Cohere’s cutting-edge generative AI model, engineered for enterprise-grade performance where speed, security, and output quality are critical. Designed to run efficiently with minimal infrastructure, it outperforms top-tier models such as GPT-4o and DeepSeek-V3 in both capability and cost-effectiveness. Its 256K-token context window, twice the size of most leading models', suits the complex multilingual and agent-based tasks essential to modern business operations. Despite its power, it can be deployed on just two GPUs, making it highly accessible. With throughput of up to 156 tokens per second (about 1.75x faster than GPT-4o), Command A delivers exceptional efficiency without compromising accuracy or depth.
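As a point of reference, here is a minimal sketch of querying Command A through Cohere's Python SDK; the ClientV2 chat interface and the model identifier command-a-03-2025 are assumptions about the current SDK and may differ for private deployments.

```python
# Sketch: querying Command A via Cohere's Python SDK (v2 client).
# The client class, chat signature, and the model ID "command-a-03-2025"
# are assumptions about the current SDK and may need adjusting.
import os
import cohere

co = cohere.ClientV2(api_key=os.environ["COHERE_API_KEY"])

response = co.chat(
    model="command-a-03-2025",
    messages=[
        {"role": "user",
         "content": "Summarize the key risks in this contract clause: ..."},
    ],
)

# The v2 chat response nests the generated text under message.content.
print(response.message.content[0].text)
```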

|  | Qwen2.5-VL-32B | Command A |
| --- | --- | --- |
| Provider | Alibaba Cloud | Cohere |
| Release Date | Mar 25, 2025 | Mar 14, 2025 |
| Modalities | text, images, video | text |
| API Providers | - | Cohere, Hugging Face, major cloud providers |
| Knowledge Cut-off Date | Unknown | - |
| Open Source | Yes | Yes |
| Pricing (Input) | $0 | $2.50 per million tokens |
| Pricing (Output) | $0 | $10.00 per million tokens |
| MMLU | 78.4% | 85.5% |
| MMLU-Pro | 49.5% | Not available |
| MMMU | 70% | Not available |
| HellaSwag | Not available | Not available |
| HumanEval | Not available | Not available |
| MATH | 82.2% | 80% |
| GPQA | 46.0% (Diamond) | 50.8% |
| IFEval | Not available | 90.9% |
| Mobile Application | - | - |
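The pricing rows above translate directly into per-request cost estimates. Below is a minimal sketch using the listed Command A rates ($2.50 per million input tokens, $10.00 per million output tokens); the token counts are illustrative.

```python
# Sketch: estimating Command A request cost from the listed per-million-token rates.
INPUT_RATE = 2.50 / 1_000_000    # USD per input token
OUTPUT_RATE = 10.00 / 1_000_000  # USD per output token

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of a single request."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Example: a long-context request with 100,000 input tokens and 2,000 output tokens.
# 100,000 * $0.0000025 + 2,000 * $0.00001 = $0.25 + $0.02 = $0.27
print(f"${request_cost(100_000, 2_000):.2f}")
```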
