Qwen 2.5 Coder 32B vs DeepSeek R1 32B

Comparing VRAM requirements, performance, and capabilities for running these models locally with Ollama.

Qwen 2.5 Coder 32B
Parameters: 32B
Context: 128K
VRAM Range: 23–70 GB
Recommended: Q4_K_M (23 GB)
By Alibaba · License: Apache 2.0
DeepSeek R1 32B
Parameters: 32B
Context: 128K
VRAM Range: 20.7–34 GB
Recommended: Q4_K_M (20.7 GB)
By DeepSeek · License: MIT

VRAM Requirements by Quantization

Side-by-side memory needs at each quality level.

Quantization Qwen 2.5 Coder 32B DeepSeek R1 32B Difference
Q4_K_M 23 GB 20.7 GB +2.3 GB
Q8_0 39 GB 34 GB +5.0 GB
F16 70 GB — —
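These figures track a simple back-of-the-envelope estimate: the weights dominate memory, so VRAM ≈ parameters × bits-per-weight ÷ 8, plus overhead for the KV cache and runtime. A minimal sketch, where the bits-per-weight values are approximate averages for each GGUF quantization (an assumption, not exact per-model figures):

```python
def estimate_vram_gb(params_billion: float, bits_per_weight: float) -> float:
    """Rough weight-only VRAM estimate in GB (excludes KV cache and runtime overhead)."""
    return params_billion * bits_per_weight / 8  # 1B params at 8 bits = 1 GB

# Approximate average bits per weight for common GGUF quantizations (assumed values).
QUANTS = {"Q4_K_M": 4.85, "Q8_0": 8.5, "F16": 16.0}

for name, bpw in QUANTS.items():
    print(f"{name}: ~{estimate_vram_gb(32, bpw):.1f} GB")
```

For a 32B model this yields roughly 19–20 GB at Q4_K_M, 34 GB at Q8_0, and 64 GB at F16; the table's slightly higher numbers also include runtime overhead.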

Capabilities

Feature support comparison.

Capability Qwen 2.5 Coder 32B DeepSeek R1 32B
text generation Yes Yes
code generation Yes Yes
reasoning Yes Yes
math Yes
creative writing Yes

Benchmark Scores

Higher is better. Scores from published evaluations.

Benchmark Qwen 2.5 Coder 32B DeepSeek R1 32B
MMLU 78.0 83.2

Hardware Compatibility

Can each model run at recommended quantization on common VRAM tiers?

VRAM Qwen 2.5 Coder 32B DeepSeek R1 32B
8 GB No No
12 GB No No
16 GB Offload Offload
24 GB Offload Tight
32 GB Runs Runs
48 GB Runs Runs
64 GB Runs Runs
96 GB Runs Runs
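The tiers above follow a simple headroom rule: a model needs its quantized size plus room for the KV cache to run fully on the GPU, and below roughly half its size even CPU offload stops being practical. A hypothetical sketch whose thresholds (1.2×, 1.1×, 0.6×) are chosen to reproduce this table, not taken from any official source:

```python
def fit_tier(vram_gb: float, model_gb: float) -> str:
    """Classify how a model of a given quantized size fits in VRAM.
    Thresholds are illustrative assumptions tuned to the table above."""
    if vram_gb >= model_gb * 1.2:  # comfortable headroom for KV cache
        return "Runs"
    if vram_gb >= model_gb * 1.1:  # fits, but little room for long contexts
        return "Tight"
    if vram_gb >= model_gb * 0.6:  # partial CPU offload is practical
        return "Offload"
    return "No"

print(fit_tier(24, 23.0))   # Qwen Q4_K_M (23 GB) on a 24 GB card -> Offload
print(fit_tier(24, 20.7))   # DeepSeek Q4_K_M (20.7 GB) on 24 GB -> Tight
```

The 2.3 GB difference at Q4_K_M is exactly what separates the two models on a 24 GB card: DeepSeek squeezes on with minimal headroom, while Qwen needs partial offload.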

Run Qwen 2.5 Coder 32B

ollama run qwen2.5-coder:32b-instruct-q4_K_M

Run DeepSeek R1 32B

ollama run deepseek-r1:32b-q4_K_M

Check your exact hardware

Use the compatibility checker to see how each model performs on your specific GPU or Mac.