Llama 4 Scout

Llama 4 Scout is a Mixture-of-Experts model with 17 billion active parameters and 16 experts, positioning it as a leading multimodal model in its class. Meta reports that it outperforms comparable models such as Gemma 3, Gemini 2.0 Flash-Lite, and Mistral 3.1 across a broad range of benchmarks. Despite this performance, Llama 4 Scout is remarkably efficient: with Int4 quantization it can run on a single NVIDIA H100 GPU. It also offers an industry-leading 10-million-token context window and is natively multimodal, accepting text, image, and video inputs for advanced real-world applications.
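The claim that Scout fits on a single H100 at Int4 comes down to simple memory arithmetic. The sketch below assumes a total parameter count of roughly 109B (the 17B figure above is active parameters per token) and 80 GiB of H100 memory; both figures are assumptions, not stated in this comparison:

```python
def int4_weight_gib(num_params: int) -> float:
    """Approximate weight memory in GiB at 4 bits per parameter."""
    bytes_total = num_params * 4 / 8  # 4 bits = 0.5 bytes per weight
    return bytes_total / 2**30

# Assumed figures: ~109B total parameters (17B active per token across
# 16 experts) and 80 GiB of memory on one H100.
TOTAL_PARAMS = 109_000_000_000
H100_GIB = 80

weights = int4_weight_gib(TOTAL_PARAMS)
print(f"Int4 weights: ~{weights:.1f} GiB of {H100_GIB} GiB")
```

At 4 bits per weight the parameters alone come to roughly 51 GiB, which is why the model fits on one 80 GiB card with headroom left for the KV cache and activations.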

Qwen2.5-VL-32B

In the five months since Qwen2-VL's release, developers have built new models on top of it and contributed valuable feedback. Qwen2.5-VL now introduces enhanced capabilities, including precise analysis of images, text, and charts, as well as object localization with structured JSON outputs. It understands long videos, identifies key events, and can act as an agent, operating tools on computers and phones. Architecturally, it features dynamic video processing and an optimized ViT encoder for improved speed and accuracy.
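The structured JSON output for object localization can be consumed directly by downstream code. A minimal sketch of parsing such a response follows; the exact keys (`bbox_2d`, `label`) and the sample detections are assumptions for illustration, not output from the model:

```python
import json

# Hypothetical model response: a JSON list of detections, each with
# pixel coordinates [x1, y1, x2, y2] and a text label.
response = """
[
  {"bbox_2d": [15, 40, 220, 310], "label": "dog"},
  {"bbox_2d": [300, 60, 480, 290], "label": "bicycle"}
]
"""

detections = json.loads(response)
for det in detections:
    x1, y1, x2, y2 = det["bbox_2d"]
    area = (x2 - x1) * (y2 - y1)
    print(f"{det['label']}: box=({x1},{y1})-({x2},{y2}), area={area}px")
```

Because the output is plain JSON rather than free-form text, it plugs into standard tooling (drawing boxes, filtering by label, computing overlaps) without fragile string parsing.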

| | Llama 4 Scout | Qwen2.5-VL-32B |
|---|---|---|
| Release Date | Apr 05, 2025 | Mar 25, 2025 |
| Modalities | text, images, video | text, images, video |
| API Providers | Meta AI, Hugging Face, Fireworks, Together, DeepInfra | - |
| Knowledge Cut-off Date | 2025-04 | Unknown |
| Open Source | Yes | Yes |
| Pricing (Input) | Not available | $0 |
| Pricing (Output) | Not available | $0 |
| MMLU | Not available | 78.4% |
| MMLU Pro (Reasoning & Knowledge) | 74.3% | 49.5% |
| MMMU (Image Reasoning) | 69.4% | 70% |
| HellaSwag | Not available | Not available |
| HumanEval | Not available | Not available |
| MATH | Not available | 82.2% |
| GPQA Diamond | 57.2% | 46.0% |
| IFEval | Not available | Not available |
| Mobile Application | - | - |
