Llama 3.3 70B Instruct

Llama 3.3 70B Instruct, created by Meta, is a multilingual large language model specifically fine-tuned for instruction-based tasks and optimized for conversational applications. It is capable of processing and generating text in multiple languages, with a context window supporting up to 128,000 tokens. Launched on December 6, 2024, the model surpasses numerous open-source and proprietary chat models in various industry benchmarks. It utilizes Grouped-Query Attention (GQA) to improve scalability and has been trained on a diverse dataset comprising over 15 trillion tokens from publicly available sources. The model's knowledge is current up to December 2023.
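As a point of reference, the sketch below shows how the model is typically queried through one of the OpenAI-compatible chat completion endpoints offered by the API providers listed in the comparison table further down. The base URL and model identifier are illustrative assumptions, not confirmed values; check your provider's documentation for the exact strings.

```python
# Minimal sketch: querying Llama 3.3 70B Instruct through an OpenAI-compatible
# chat completions endpoint, as offered by providers such as Fireworks,
# Together, DeepInfra, and Hyperbolic. The base_url and model identifier are
# illustrative assumptions; consult your provider's documentation.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example-provider.com/v1",  # assumed endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="meta-llama/Llama-3.3-70B-Instruct",  # assumed model identifier
    messages=[
        {"role": "system", "content": "You are a concise multilingual assistant."},
        {"role": "user", "content": "Explain Grouped-Query Attention in two sentences."},
    ],
    max_tokens=256,
    temperature=0.7,
)

print(response.choices[0].message.content)
```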

Nova Lite

Amazon Nova Lite is a versatile multimodal model designed to process text, image, and video inputs, producing text-based outputs. Featuring a 300K-token context window, it is well-suited for real-time interactions, document analysis, and visual question answering. As part of the Amazon Nova foundation models, it supports fine-tuning and distillation, enabling advanced customization.
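Because Nova Lite is served through Amazon Bedrock (see the table below), a text-only call can be made with the Bedrock Converse API via boto3, as in the minimal sketch that follows. The model ID and region are assumptions; verify them, and the multimodal request format for image or video inputs, against the Bedrock documentation.

```python
# Minimal sketch: invoking Amazon Nova Lite through the Amazon Bedrock
# Converse API with boto3. The model ID and region are assumptions; confirm
# the exact identifier and the multimodal request format in the Bedrock docs.
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")  # assumed region

response = client.converse(
    modelId="amazon.nova-lite-v1:0",  # assumed Nova Lite model ID
    messages=[
        {
            "role": "user",
            "content": [
                {"text": "List three things to check when reviewing a scanned invoice."}
            ],
        }
    ],
    inferenceConfig={"maxTokens": 256, "temperature": 0.2},
)

print(response["output"]["message"]["content"][0]["text"])
```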

| | Llama 3.3 70B Instruct | Nova Lite |
|---|---|---|
| Provider | Meta | Amazon |
| Web Site | - | - |
| Release Date | Dec 06, 2024 | Dec 02, 2024 |
| Input Modalities | text | text, images, video |
| API Providers | Fireworks, Together, DeepInfra, Hyperbolic | Amazon Bedrock |
| Knowledge Cut-off Date | December 2023 | Purposefully not disclosed |
| Open Source | Yes | No |
| Pricing, Input (per 1M tokens) | $0.23 | $0.06 |
| Pricing, Output (per 1M tokens) | $0.40 | $0.24 |
| MMLU | 86% (0-shot, CoT) | 80.5% (CoT) |
| MMLU-Pro | 68.9% (5-shot, CoT) | Not available |
| MMMU | Not available | Not available |
| HellaSwag | Not available | Not available |
| HumanEval | 88.4% (pass@1) | 85.4% (pass@1) |
| MATH | 77% (0-shot, CoT) | 73.3% (CoT) |
| GPQA | 50.5% (0-shot, CoT) | 42% (main) |
| IFEval | 92.1% | 89.7% |
| Mobile Application | - | - |
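To put the per-token prices above into perspective, the short sketch below estimates the cost of a hypothetical monthly workload at the listed rates; the token volumes are assumptions chosen purely for illustration.

```python
# Back-of-the-envelope cost estimate using the per-million-token prices from
# the comparison table above. The monthly token volumes are hypothetical.
PRICES = {  # model: (input $ per 1M tokens, output $ per 1M tokens)
    "Llama 3.3 70B Instruct": (0.23, 0.40),
    "Nova Lite": (0.06, 0.24),
}

INPUT_TOKENS = 2_000_000   # assumed monthly input volume
OUTPUT_TOKENS = 500_000    # assumed monthly output volume

for model, (price_in, price_out) in PRICES.items():
    cost = (INPUT_TOKENS / 1e6) * price_in + (OUTPUT_TOKENS / 1e6) * price_out
    print(f"{model}: ${cost:.2f} for this workload")
```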
