Gemini 2.0 Flash is Google's high-performance, low-latency model designed to drive advanced agentic experiences. Equipped with native tool integration, it supports multimodal inputs, including text, images, video, and audio. Offering substantial improvements over previous versions, the model balances efficiency, speed, and enhanced capabilities for seamless real-time interactions.
Llama 3.3 70B Instruct, created by Meta, is a multilingual large language model fine-tuned for instruction following and optimized for conversational applications. It processes and generates text in multiple languages, with a context window of up to 128,000 tokens. Released on December 6, 2024, the model surpasses many open-source and proprietary chat models on standard industry benchmarks. It uses Grouped-Query Attention (GQA) for improved inference scalability and was trained on over 15 trillion tokens from publicly available sources. The model's knowledge is current up to December 2023.
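Grouped-Query Attention reduces memory and bandwidth at inference time by letting several query heads share one key/value head. The sketch below is a minimal, illustrative implementation of that sharing pattern in NumPy; it is not Meta's code, and all names and shapes are assumptions for the example.

```python
import numpy as np

def grouped_query_attention(q, k, v, n_groups):
    """Toy GQA: q has n_q_heads heads, k/v have only n_groups heads.

    q: (n_q_heads, seq, d)   k, v: (n_groups, seq, d)
    Each group of n_q_heads // n_groups query heads shares one KV head.
    """
    n_q_heads, seq, d = q.shape
    share = n_q_heads // n_groups  # query heads per KV group
    out = np.empty_like(q)
    for h in range(n_q_heads):
        g = h // share  # KV head shared by this query head
        scores = q[h] @ k[g].T / np.sqrt(d)          # (seq, seq)
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)  # softmax rows
        out[h] = weights @ v[g]                       # (seq, d)
    return out
```

With 8 query heads and 2 KV groups, the KV cache is a quarter of the multi-head-attention size while the output shape matches standard attention.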
| | Gemini 2.0 Flash | Llama 3.3 70B Instruct |
|---|---|---|
| Provider | Google | Meta |
| Release Date | Dec 11, 2024 | Dec 06, 2024 |
| Modalities | text, images, voice, video | text |
| API Providers | Google AI Studio, Vertex AI | Fireworks, Together, DeepInfra, Hyperbolic |
| Knowledge Cut-off Date | Aug 2024 | Dec 2023 |
| Open Source | No | Yes |
| Pricing (Input) | $0.10 per million tokens | $0.23 per million tokens |
| Pricing (Output) | $0.40 per million tokens | $0.40 per million tokens |
| MMLU | Not available | 86% (0-shot, CoT) |
| MMLU-Pro | 77.6% | 68.9% (5-shot, CoT) |
| MMMU | 71.7% | Not available |
| HellaSwag | Not available | Not available |
| HumanEval | Not available | 88.4% (pass@1) |
| MATH | 90.9% | 77% (0-shot, CoT) |
| GPQA | 60.1% (Diamond) | 50.5% (0-shot, CoT) |
| IFEval | Not available | 92.1% |
| Mobile Application | Available | - |
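The per-million-token prices in the table translate directly into workload cost. The sketch below estimates a monthly bill at the listed rates for a hypothetical workload (the 50M-input / 10M-output token volume is an assumption for illustration, not from the source).

```python
# USD per million tokens (input, output), taken from the comparison table.
PRICES = {
    "Gemini 2.0 Flash": (0.10, 0.40),
    "Llama 3.3 70B Instruct": (0.23, 0.40),
}

def monthly_cost(model, input_tokens, output_tokens):
    """Cost in USD for a given token volume at the table's listed rates."""
    price_in, price_out = PRICES[model]
    return (input_tokens * price_in + output_tokens * price_out) / 1_000_000

# Hypothetical workload: 50M input + 10M output tokens per month.
for name in PRICES:
    print(f"{name}: ${monthly_cost(name, 50_000_000, 10_000_000):,.2f}")
```

At this input-heavy mix, Gemini 2.0 Flash's lower input price dominates ($9.00 vs. $15.50 per month); with identical output prices, the gap narrows as the share of output tokens grows.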