by DeepSeek · 2 months ago
Unified reasoning and non-reasoning model that merges DeepSeek-V3 and R1 capabilities into a single architecture.
| Context Window | Max Output | TTFT | Speed | Input Price | Output Price |
|---|---|---|---|---|---|
| 128K | 16K | 300ms | 90 tok/s | $0.28/M tokens | $0.42/M tokens |
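The TTFT and speed figures above combine into a rough end-to-end latency estimate: time-to-first-token plus output length divided by generation throughput. A minimal sketch (function name and defaults are illustrative, taken from the stats above):

```python
def response_time_s(output_tokens: int, ttft_ms: float = 300, tok_per_s: float = 90) -> float:
    """Rough end-to-end latency: time-to-first-token plus streaming time.

    Defaults use the listed 300ms TTFT and 90 tok/s throughput.
    """
    return ttft_ms / 1000 + output_tokens / tok_per_s

# A 300-token reply: 0.3s TTFT + 300/90 s of streaming ≈ 3.6s
print(round(response_time_s(300), 1))
```

At these speeds, streaming time dominates TTFT for anything longer than a short reply.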
Performance Profile
- Frontier-tier performance at $0.28/M input tokens
- 128K token context window: handles lengthy documents with ease
- Strong at both text and code tasks
- Fully open source: self-host, fine-tune, and customize without restrictions
vs similar-tier models
| Model | Provider | Input | Output | Context | Avg Score |
|---|---|---|---|---|---|
| DeepSeek-V3.2 (current) | DeepSeek | $0.28 | $0.42 | 128K | 86.4 |
| GPT-4o | OpenAI | $2.50 | $10.00 | 128K | 81.1 |
| Kimi K2.5 | Moonshot AI | $0.45 | $2.20 | 256K | 92.3 |
| Task | Description | Tokens | Est. cost |
|---|---|---|---|
| Generate a function | Spec → implementation with tests | 500 in · 300 out | <$0.001 |
| Review a 2,000-line PR | Full pull request code review | 10,000 in · 2,000 out | $0.0036 |
| Refactor a 5,000-line module | Major refactoring with explanations | 25,000 in · 5,000 out | $0.0091 |
| Analyze a full codebase | Architecture analysis + recommendations | 100,000 in · 10,000 out | $0.032 |
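The per-task estimates above follow directly from the listed prices: input tokens times the input rate plus output tokens times the output rate, divided by one million. A minimal sketch (function name is illustrative; defaults are this model's listed prices):

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_price_per_m: float = 0.28,
                  output_price_per_m: float = 0.42) -> float:
    """Estimate one request's cost in dollars from token counts
    and per-million-token prices (defaults: $0.28/M in, $0.42/M out)."""
    return (input_tokens * input_price_per_m
            + output_tokens * output_price_per_m) / 1_000_000

# Reproduces the table rows above:
print(round(estimate_cost(10_000, 2_000), 4))    # PR review: 0.0036
print(round(estimate_cost(100_000, 10_000), 3))  # codebase analysis: 0.032
```

Swapping in another model's per-million prices gives a like-for-like comparison against the table in the previous section.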
| Workload | Monthly | Daily |
|---|---|---|
| Code generation | $8/mo | $0.27/day |
| PR reviews | $109/mo | $4/day |
| Codebase analysis | $521/mo | $17/day |
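Monthly figures like these scale linearly from per-request cost and request volume. A minimal sketch (the 1,000-runs/day rate is an assumed workload, not stated on this page):

```python
def monthly_cost(cost_per_run: float, runs_per_day: int, days: int = 30) -> float:
    """Project a monthly bill from per-request cost and daily request volume."""
    return cost_per_run * runs_per_day * days

# PR reviews at $0.0036 each, at a hypothetical 1,000 runs/day:
print(round(monthly_cost(0.00364, 1_000)))  # ≈ $109/month
```

The same function works for any workload row once you know its per-request cost and your own volume.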
DeepSeek
Hybrid model combining V3 and R1 strengths. Improved reasoning with RL techniques from R1.
Input $0.27/M · Output $1.10/M · Context 128K
DeepSeek
DeepSeek's open-source MoE model rivaling frontier models at a fraction of the cost.
Input $0.27/M · Output $1.10/M · Context 128K
DeepSeek
DeepSeek's reasoning model with transparent chain-of-thought. Open-source and highly competitive.
Input $0.55/M · Output $2.19/M · Context 128K
OpenAI
OpenAI's most advanced multimodal model. Excels at text, vision, and audio tasks with fast response times.
Input $2.50/M · Output $10.00/M · Context 128K
Moonshot AI
Moonshot AI's frontier multimodal MoE model with 1T total parameters (32B active). Tops SWE-bench and AIME 2025 benchmarks.
Input $0.45/M · Output $2.20/M · Context 256K
Google
Google's most capable thinking model with breakthrough performance on reasoning and coding.
Input $1.25/M · Output $10.00/M · Context 1.0M