Llama 3.3 70B Instruct

Llama 3.3 70B Instruct, created by Meta, is a multilingual large language model specifically fine-tuned for instruction-based tasks and optimized for conversational applications. It is capable of processing and generating text in multiple languages, with a context window supporting up to 128,000 tokens. Launched on December 6, 2024, the model surpasses numerous open-source and proprietary chat models in various industry benchmarks. It utilizes Grouped-Query Attention (GQA) to improve scalability and has been trained on a diverse dataset comprising over 15 trillion tokens from publicly available sources. The model's knowledge is current up to December 2023.
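For readers who want to try the model, here is a minimal sketch of calling Llama 3.3 70B Instruct through an OpenAI-compatible endpoint such as the one Together exposes. The base URL, model identifier, and environment variable name are assumptions and vary by provider, so check the provider's documentation before use.

```python
# Minimal sketch: querying Llama 3.3 70B Instruct via an OpenAI-compatible API.
# Base URL, model id, and env var name are assumptions; Fireworks, DeepInfra, and
# Hyperbolic expose similar endpoints with their own identifiers.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.together.xyz/v1",   # assumed Together endpoint
    api_key=os.environ["TOGETHER_API_KEY"],   # hypothetical env var name
)

response = client.chat.completions.create(
    model="meta-llama/Llama-3.3-70B-Instruct-Turbo",  # assumed model id; differs per provider
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize grouped-query attention in two sentences."},
    ],
    max_tokens=200,
)
print(response.choices[0].message.content)
```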

Command A

Command A is Cohere's cutting-edge generative AI model, engineered for enterprise-grade performance where speed, security, and output quality are critical. Designed to run efficiently with minimal infrastructure, it outperforms top-tier models like GPT-4o and DeepSeek-V3 in both capability and cost-effectiveness. Its extended 256K token context window, twice as large as that of most leading models, helps it excel at the complex multilingual and agent-based tasks essential to modern business operations. Despite its power, it can be deployed on just two GPUs, making it highly accessible. With throughput of up to 156 tokens per second, roughly 1.75x faster than GPT-4o, Command A delivers exceptional efficiency without compromising accuracy or depth.
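By way of illustration, a minimal sketch of calling Command A through Cohere's Python SDK. The model identifier and environment variable name below are assumptions; verify the current values in Cohere's documentation.

```python
# Minimal sketch: querying Command A via Cohere's Python SDK (v2 client).
# Model id and env var name are assumptions; check Cohere's docs for the current ones.
import os
import cohere

co = cohere.ClientV2(api_key=os.environ["COHERE_API_KEY"])  # hypothetical env var name

response = co.chat(
    model="command-a-03-2025",  # assumed model identifier
    messages=[
        {"role": "user", "content": "Draft a three-bullet summary of our Q3 planning notes."},
    ],
)
print(response.message.content[0].text)
```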

| | Llama 3.3 70B Instruct | Command A |
|---|---|---|
| Web Site | - | - |
| Provider | Meta | Cohere |
| Chat | - | - |
| Release Date | December 6, 2024 | - |
| Modalities | text | text |
| API Providers | Fireworks, Together, DeepInfra, Hyperbolic | Cohere, Hugging Face, major cloud providers |
| Knowledge Cut-off Date | December 2023 | - |
| Open Source | Yes | Yes |
| Pricing (Input) | $0.23 per million tokens | $2.50 per million tokens |
| Pricing (Output) | $0.40 per million tokens | $10.00 per million tokens |
| MMLU | 86% (0-shot, CoT) | 85.5% |
| MMLU-Pro | 68.9% (5-shot, CoT) | Not available |
| MMMU | Not available | Not available |
| HellaSwag | Not available | Not available |
| HumanEval | 88.4% (pass@1) | Not available |
| MATH | 77% (0-shot, CoT) | 80% |
| GPQA | 50.5% (0-shot, CoT) | 50.8% |
| IFEval | 92.1% | 90.9% |
| SimpleQA | - | - |
| AIME 2024 | - | - |
| AIME 2025 | - | - |
| Aider Polyglot | - | - |
| LiveCodeBench v5 | - | - |
| Global MMLU (Lite) | - | - |
| MathVista | - | - |
| Mobile Application | - | - |
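To put the per-token prices in the table above into concrete terms, here is a small sketch that estimates the cost of a hypothetical workload; the token counts are made up for illustration.

```python
# Rough cost comparison for a hypothetical workload of 5M input and 1M output tokens,
# using the per-million-token prices listed in the table above.
PRICES = {
    "Llama 3.3 70B Instruct": {"input": 0.23, "output": 0.40},   # USD per 1M tokens
    "Command A":              {"input": 2.50, "output": 10.00},
}

input_tokens, output_tokens = 5_000_000, 1_000_000  # illustrative workload

for model, p in PRICES.items():
    cost = (input_tokens / 1e6) * p["input"] + (output_tokens / 1e6) * p["output"]
    print(f"{model}: ${cost:.2f}")
# Llama 3.3 70B Instruct: $1.55
# Command A: $22.50
```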
