OpenAI o3-mini is a fast, cost-effective reasoning model designed for STEM workloads, with strong performance in science, mathematics, and coding. Launched in January 2025, it supports key developer features such as function calling, structured outputs, and developer messages. The model offers three reasoning effort levels (low, medium, and high), letting users trade deeper analysis against faster response times. Unlike the o3 model, it lacks vision capabilities. Initially available to select developers in API usage tiers 3-5, it can be accessed via the Chat Completions API, Assistants API, and Batch API.
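As a rough illustration of how the reasoning effort levels are selected in practice, the sketch below calls o3-mini through the Chat Completions API with the official OpenAI Python SDK. The prompt and the chosen effort level are placeholders, not part of the comparison itself.

```python
# Minimal sketch: calling o3-mini via the Chat Completions API (OpenAI Python SDK).
# Assumes OPENAI_API_KEY is set in the environment and that the account has
# access to o3-mini (initially limited to API usage tiers 3-5).
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="o3-mini",
    reasoning_effort="high",  # one of "low", "medium", "high"
    messages=[
        # o-series models accept "developer" messages in place of system prompts
        {"role": "developer", "content": "You are a concise math assistant."},
        {"role": "user", "content": "Integrate x * e^x and show the steps."},
    ],
)

print(response.choices[0].message.content)
```

Raising `reasoning_effort` generally improves answer quality on hard STEM problems at the cost of latency and more reasoning tokens billed as output.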
Llama 3.3 70B Instruct, created by Meta, is a multilingual large language model specifically fine-tuned for instruction-based tasks and optimized for conversational applications. It is capable of processing and generating text in multiple languages, with a context window supporting up to 128,000 tokens. Launched on December 6, 2024, the model surpasses numerous open-source and proprietary chat models in various industry benchmarks. It utilizes Grouped-Query Attention (GQA) to improve scalability and has been trained on a diverse dataset comprising over 15 trillion tokens from publicly available sources. The model's knowledge is current up to December 2023.
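Llama 3.3 70B Instruct is typically reached through hosted inference providers rather than a first-party API. The sketch below queries it over an OpenAI-compatible endpoint; the base URL and model identifier follow Together's conventions and are assumptions to verify against your provider's documentation.

```python
# Minimal sketch: querying Llama 3.3 70B Instruct through an OpenAI-compatible
# hosted endpoint (Together shown here; Fireworks, DeepInfra, and Hyperbolic
# expose similar interfaces). Base URL and model name are assumptions.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.together.xyz/v1",
    api_key=os.environ["TOGETHER_API_KEY"],
)

response = client.chat.completions.create(
    model="meta-llama/Llama-3.3-70B-Instruct-Turbo",
    messages=[
        {"role": "system", "content": "Answer in French."},
        {"role": "user", "content": "Summarize grouped-query attention in two sentences."},
    ],
    max_tokens=256,
)

print(response.choices[0].message.content)
```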
| | o3-mini | Llama 3.3 70B Instruct |
|---|---|---|
| Provider | OpenAI | Meta |
| Web Site | | |
| Release Date | Jan 31, 2025 | Dec 06, 2024 |
| Modalities | text | text |
| API Providers | OpenAI API | Fireworks, Together, DeepInfra, Hyperbolic |
| Knowledge Cut-off Date | Unknown | December 2023 |
| Open Source | No | Yes |
| Pricing (Input) | $1.10 per million tokens | $0.23 per million tokens |
| Pricing (Output) | $4.40 per million tokens | $0.40 per million tokens |
| MMLU | 86.9% (pass@1, high effort) | 86.0% (0-shot, CoT) |
| MMLU-Pro | Not available | 68.9% (5-shot, CoT) |
| MMMU | Not available | Not available |
| HellaSwag | Not available | Not available |
| HumanEval | Not available | 88.4% (pass@1) |
| MATH | 97.9% (pass@1, high effort) | 77.0% (0-shot, CoT) |
| GPQA | 79.7% (0-shot, high effort) | 50.5% (0-shot, CoT) |
| IFEval | Not available | 92.1% |
| Mobile Application | - | |
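To make the pricing rows above concrete, here is a back-of-the-envelope cost comparison using the listed per-million-token rates; the input and output token volumes are hypothetical workload figures chosen purely for illustration.

```python
# Back-of-the-envelope cost comparison using the listed per-million-token prices.
# The workload (input/output token volumes) is a made-up example.
PRICES = {
    "o3-mini":                {"input": 1.10, "output": 4.40},  # USD per 1M tokens
    "Llama 3.3 70B Instruct": {"input": 0.23, "output": 0.40},
}

input_tokens = 5_000_000    # hypothetical monthly input volume
output_tokens = 1_000_000   # hypothetical monthly output volume

for model, p in PRICES.items():
    cost = (input_tokens / 1e6) * p["input"] + (output_tokens / 1e6) * p["output"]
    print(f"{model}: ${cost:.2f}")

# At these assumed volumes:
#   o3-mini:                 5 * 1.10 + 1 * 4.40 = $9.90
#   Llama 3.3 70B Instruct:  5 * 0.23 + 1 * 0.40 = $1.55
```

Note that for a reasoning model such as o3-mini, internal reasoning tokens are billed as output, so real output volumes can exceed the visible completion length.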