OpenAI's o3-mini is a fast, cost-effective reasoning model designed for STEM applications, with strong performance in science, mathematics, and coding. Launched on January 31, 2025, it includes essential developer features such as function calling, structured outputs, and developer messages. The model offers three reasoning-effort levels (low, medium, and high), letting users trade deeper analysis against faster response times. Unlike the larger o3 model, it lacks vision capabilities. Initially available to developers in API usage tiers 3–5, it can be accessed via the Chat Completions API, Assistants API, and Batch API.
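The reasoning-effort selection described above can be sketched as a Chat Completions request body. This is a minimal illustration, not a verbatim SDK call: the `reasoning_effort` values and the developer-message role follow the description above, while the prompt and helper name are invented for the example.

```python
# Sketch of a Chat Completions request body for o3-mini.
# The model name and the reasoning_effort values ("low", "medium",
# "high") come from the description above; build_o3_mini_request
# and the prompt content are illustrative assumptions.

def build_o3_mini_request(prompt: str, effort: str = "medium") -> dict:
    """Assemble a request payload selecting a reasoning-effort level."""
    if effort not in ("low", "medium", "high"):
        raise ValueError("effort must be 'low', 'medium', or 'high'")
    return {
        "model": "o3-mini",
        "reasoning_effort": effort,  # trades analysis depth for latency
        "messages": [
            # o3-mini accepts developer messages in place of system messages
            {"role": "developer", "content": "You are a concise STEM tutor."},
            {"role": "user", "content": prompt},
        ],
    }

request = build_o3_mini_request("Factor x^2 - 5x + 6.", effort="high")
```

The payload would then be sent through the Chat Completions, Assistants, or Batch API; raising the effort level increases answer depth at the cost of response time.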
Command A is Cohere's cutting-edge generative AI model, engineered for enterprise-grade performance where speed, security, and output quality are critical. Designed to run efficiently with minimal infrastructure, it outperforms top-tier models like GPT-4o and DeepSeek-V3 in both capability and cost-effectiveness. Featuring an extended 256K-token context window, twice as large as most leading models, it excels at the complex multilingual and agent-based tasks essential to modern business operations. Despite its power, it can be deployed on just two GPUs, making it highly accessible. With throughput of up to 156 tokens per second (about 1.75x faster than GPT-4o), Command A delivers exceptional efficiency without compromising accuracy or depth.
| | o3-mini | Command A |
|---|---|---|
| Provider | OpenAI | Cohere |
| Release Date | Jan 31, 2025 | Mar 14, 2025 |
| Modalities | Text | Text |
| API Providers | OpenAI API | Cohere, Hugging Face, major cloud providers |
| Knowledge Cut-off Date | Unknown | Unknown |
| Open Source | No | Yes |
| Pricing (Input) | $1.10 per million tokens | $2.50 per million tokens |
| Pricing (Output) | $4.40 per million tokens | $10.00 per million tokens |
| MMLU | 86.9% (pass@1, high effort) | 85.5% |
| MMLU-Pro | Not available | Not available |
| MMMU | Not available | Not available |
| HellaSwag | Not available | Not available |
| HumanEval | Not available | Not available |
| MATH | 97.9% (pass@1, high effort) | 80% |
| GPQA | 79.7% (0-shot, high effort) | 50.8% |
| IFEval | Not available | 90.9% |
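The per-million-token prices in the table translate directly into request costs. The sketch below assumes a hypothetical workload of 50,000 input and 10,000 output tokens; only the prices themselves come from the table.

```python
# Estimate request cost from the per-million-token prices listed above:
# o3-mini $1.10 in / $4.40 out; Command A $2.50 in / $10.00 out.
# The token counts in the usage example are illustrative assumptions.

PRICES_PER_MILLION = {
    "o3-mini":   {"input": 1.10, "output": 4.40},
    "command-a": {"input": 2.50, "output": 10.00},
}

def cost_usd(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for a given token count at the table's listed prices."""
    p = PRICES_PER_MILLION[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# 50k input + 10k output tokens:
print(cost_usd("o3-mini", 50_000, 10_000))    # 0.099
print(cost_usd("command-a", 50_000, 10_000))  # 0.225
```

At these list prices, o3-mini works out to less than half the cost of Command A for the same token volume, though self-hosting the open-weight Command A changes that calculus.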