DeepSeek R1 32B vs QwQ 32B
Comparing VRAM requirements, performance, and capabilities for running these models locally with Ollama.
| Spec | DeepSeek R1 32B | QwQ 32B |
|---|---|---|
| Parameters | 32B | 32B |
| Context | 128K | 128K |
| VRAM range | 20.7–34 GB | 21.5–68 GB |
| Recommended quantization | Q4_K_M (20.7 GB) | Q4_K_M (21.5 GB) |
| Publisher | DeepSeek | Alibaba |
| License | MIT | Apache 2.0 |
VRAM Requirements by Quantization
Side-by-side memory needs at each quality level.
| Quantization | DeepSeek R1 32B | QwQ 32B | Difference (DeepSeek − QwQ) |
|---|---|---|---|
| Q4_K_M | 20.7 GB | 21.5 GB | -0.8 GB |
| Q8_0 | 34 GB | 37 GB | -3.0 GB |
| F16 | — | 68 GB | — |
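The figures above roughly follow a simple rule of thumb: weight memory scales with parameter count times bits per weight. A quick sketch of that arithmetic (the effective bits-per-weight value is an approximation, not a figure from this page; real totals add KV-cache and runtime overhead):

```shell
# Rule-of-thumb weight memory: params_B × bits_per_weight ÷ 8.
# Q4_K_M averages roughly 4.8 effective bits/weight (assumption).
awk 'BEGIN {
  params = 32    # billions of parameters
  bits   = 4.8   # approximate effective bits/weight for Q4_K_M
  printf "~%.1f GB for weights alone\n", params * bits / 8
}'
```

That lands near the 20.7 GB quoted above once a couple of gigabytes of context cache and overhead are added.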
Capabilities
Feature support comparison.
| Capability | DeepSeek R1 32B | QwQ 32B |
|---|---|---|
| text generation | Yes | Yes |
| code generation | Yes | Yes |
| reasoning | Yes | Yes |
| math | Yes | Yes |
| creative writing | Yes | — |
| multilingual | — | Yes |
Benchmark Scores
Higher is better. Scores from published evaluations.
| Benchmark | DeepSeek R1 32B | QwQ 32B |
|---|---|---|
| MMLU | 83.2 | 82.5 |
Hardware Compatibility
Can each model run at its recommended quantization (Q4_K_M) on common VRAM tiers? "Offload" means part of the model must spill to system RAM, "Tight" means it fits with little headroom for context, and "Runs" means it fits comfortably.
| VRAM | DeepSeek R1 32B | QwQ 32B |
|---|---|---|
| 8 GB | No | No |
| 12 GB | No | No |
| 16 GB | Offload | Offload |
| 24 GB | Tight | Tight |
| 32 GB | Runs | Runs |
| 48 GB | Runs | Runs |
| 64 GB | Runs | Runs |
| 96 GB | Runs | Runs |
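The "Offload" rows reflect partial GPU placement: with 16 GB of VRAM against a ~20.7 GB Q4_K_M model, only part of the weights fit on the GPU and the rest is served from system RAM at much lower speed. A rough illustration of the split (arithmetic only; real splits happen per layer and must also leave room for the KV cache):

```shell
# Fraction of a 20.7 GB Q4_K_M model that fits in 16 GB of VRAM.
# Illustrative only, not a measurement from this page.
awk 'BEGIN { printf "%.0f%% of weights on GPU, rest offloaded\n", 16 / 20.7 * 100 }'
```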
Run DeepSeek R1 32B
ollama run deepseek-r1:32b
Run QwQ 32B
ollama run qwq:32b
Check your exact hardware
Use the compatibility checker to see how each model performs on your specific GPU or Mac.