DeepSeek-R1 is a 671B parameter Mixture-of-Experts (MoE) model with 37B activated parameters per token, trained via large-scale reinforcement learning with a focus on reasoning capabilities. It incorporates two RL stages for discovering improved reasoning patterns and aligning with human preferences, along with two SFT stages for seeding reasoning and non-reasoning capabilities. The model achieves performance comparable to OpenAI-o1 across math, code, and reasoning tasks.
Amazon Nova Micro is a text-only model optimized for cost and speed. With a context window of 128K tokens, it excels at tasks like text summarization, translation, interactive chat, and basic coding. Released as part of the Amazon Nova family of foundation models, it supports fine-tuning and distillation for customization on proprietary data.
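Both models are available through hosted APIs (see the comparison table below). The following is a minimal sketch of querying each one with the same prompt; it assumes DeepSeek-R1 is reachable through DeepSeek's OpenAI-compatible endpoint under the model name `deepseek-reasoner`, and Nova Micro through the Amazon Bedrock Converse API under the model ID `amazon.nova-micro-v1:0` — check the providers' documentation for the current identifiers and regions.

```python
# Sketch: send the same prompt to DeepSeek-R1 and Nova Micro.
# Assumptions (not from this page): model name "deepseek-reasoner",
# Bedrock model ID "amazon.nova-micro-v1:0", region "us-east-1".
import os
import boto3
from openai import OpenAI

prompt = "Summarize the trade-offs between a large MoE reasoning model and a small, fast text model."

# DeepSeek-R1 via DeepSeek's OpenAI-compatible API.
deepseek = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],
    base_url="https://api.deepseek.com",  # assumed endpoint
)
r1 = deepseek.chat.completions.create(
    model="deepseek-reasoner",  # assumed R1 model name
    messages=[{"role": "user", "content": prompt}],
)
print("DeepSeek-R1:", r1.choices[0].message.content)

# Nova Micro via the Amazon Bedrock Converse API.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
nova = bedrock.converse(
    modelId="amazon.nova-micro-v1:0",  # assumed Bedrock model ID
    messages=[{"role": "user", "content": [{"text": prompt}]}],
)
print("Nova Micro:", nova["output"]["message"]["content"][0]["text"])
```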
| | DeepSeek-R1 | Nova Micro |
|---|---|---|
| Provider | DeepSeek | Amazon |
| Web Site | | |
| Release Date | Jan 21, 2025 | Dec 02, 2024 |
| Modalities | text | text |
| API Providers | DeepSeek, HuggingFace | Amazon Bedrock |
| Knowledge Cut-off Date | Unknown | Purposefully not disclosed |
| Open Source | Yes | No |
| Pricing (Input) | $0.55 per million tokens | $0.04 per million tokens |
| Pricing (Output) | $2.19 per million tokens | $0.14 per million tokens |
| MMLU | 90.8% (Pass@1) | 77.6% (CoT) |
| MMLU-Pro | 84% (EM) | - |
| MMMU | - | - |
| HellaSwag | - | - |
| HumanEval | - | 81.1% (pass@1) |
| MATH | - | 69.3% (CoT) |
| GPQA | 71.5% (Pass@1) | 40% (Main) |
| IFEval | 83.3% (Prompt Strict) | 87.2% |
| Mobile Application | - | |
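The pricing gap is easiest to read as a worked example. The sketch below uses the per-million-token prices from the table above and a hypothetical monthly workload (50M input tokens, 10M output tokens); the workload size is illustrative, not from this page.

```python
# Worked cost example using the per-million-token prices listed above.
PRICES = {  # model: (input $/1M tokens, output $/1M tokens)
    "DeepSeek-R1": (0.55, 2.19),
    "Nova Micro": (0.04, 0.14),
}

def cost_usd(model: str, input_tokens: float, output_tokens: float) -> float:
    """Return the USD cost for the given token counts."""
    in_price, out_price = PRICES[model]
    return (input_tokens / 1e6) * in_price + (output_tokens / 1e6) * out_price

# Hypothetical workload: 50M input tokens and 10M output tokens per month.
for model in PRICES:
    print(f"{model}: ${cost_usd(model, 50e6, 10e6):.2f} per month")
# DeepSeek-R1: $49.40 per month  (50 * 0.55 + 10 * 2.19)
# Nova Micro:  $3.40 per month   (50 * 0.04 + 10 * 0.14)
```

At these list prices, Nova Micro is roughly an order of magnitude cheaper per token, while DeepSeek-R1's benchmark scores above reflect its focus on harder reasoning tasks.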