Gemini 2.0 Flash Thinking is an advanced reasoning model designed to enhance performance and explainability by making its thought process visible. It excels in complex problem-solving, coding challenges, and mathematical reasoning, demonstrating step-by-step solutions. Optimized for tasks that demand detailed explanations and logical analysis, the model also features native tool integration, including code execution and Google Search capabilities.
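Since the model is accessed through Google AI Studio, Vertex AI, or the Gemini API (see the table below), a minimal sketch of a request via the Google Generative AI Python SDK looks roughly like the following. The model identifier `gemini-2.0-flash-thinking-exp` and the environment-variable name are assumptions and should be checked against the current API documentation.

```python
# Minimal sketch, assuming the google-generativeai SDK and an assumed model ID.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])  # assumed env var name

# Assumed identifier for the thinking variant; verify against the model listing.
model = genai.GenerativeModel("gemini-2.0-flash-thinking-exp")

response = model.generate_content(
    "A train travels 120 km in 90 minutes. What is its average speed in km/h? "
    "Show your reasoning step by step."
)
print(response.text)  # final answer; how much of the thought process is exposed depends on the API surface
```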
Mistral Large 2, developed by Mistral, offers a 128K-token context window and is priced at $3.00 per million input tokens and $9.00 per million output tokens. Released on July 24, 2024, the model scored 84.0 on the MMLU benchmark in a 5-shot evaluation, demonstrating strong performance in diverse tasks.
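At those list prices, per-request cost is easy to estimate. The sketch below is a plain back-of-the-envelope calculation based on the stated $3.00 / $9.00 per-million-token rates; the example token counts are hypothetical.

```python
# Back-of-the-envelope cost estimate for Mistral Large 2 at the listed prices.
INPUT_PRICE_PER_M = 3.00   # USD per million input tokens
OUTPUT_PRICE_PER_M = 9.00  # USD per million output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of a single request."""
    return (input_tokens * INPUT_PRICE_PER_M + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Hypothetical request: 2,000 input tokens, 500 output tokens.
print(f"${estimate_cost(2_000, 500):.4f}")  # $0.0105
```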
| | Gemini 2.0 Flash Thinking | Mistral Large 2 |
|---|---|---|
| Web Site | - | - |
| Provider | Google | Mistral |
| Chat | - | - |
| Release Date | - | July 24, 2024 |
| Modalities | text, images | text |
| API Providers | Google AI Studio, Vertex AI, Gemini API | Azure AI, AWS Bedrock, Google AI Studio, Vertex AI, Snowflake Cortex |
| Knowledge Cut-off Date | 04.2024 | Unknown |
| Open Source | No | Yes |
| Pricing (Input) | Not available | $3.00 per million tokens |
| Pricing (Output) | Not available | $9.00 per million tokens |
| MMLU | Not available | 84.0% (5-shot) |
| MMLU-Pro | Not available | 50.69% |
| MMMU | 75.4% | Not available |
| HellaSwag | Not available | Not available |
| HumanEval | Not available | Not available |
| MATH | Not available | 1.13% |
| GPQA | 74.2% (Diamond) | 24.94% |
| IFEval | Not available | 84.01% |
| SimpleQA | - | - |
| AIME 2024 | - | - |
| AIME 2025 | - | - |
| Aider Polyglot | - | - |
| LiveCodeBench v5 | - | - |
| Global MMLU (Lite) | - | - |
| MathVista | - | - |
| Mobile Application | - | - |