
Codestral 22B vs Qwen 2.5 Coder 14B

Comparing VRAM requirements, performance, and capabilities for running these models locally with Ollama.

Codestral 22B — by Mistral AI · License: Mistral AI Non-Production License

Parameters: 22B
Context: 32K
VRAM Range: 14.7–24 GB
Recommended: Q4_K_M (14.7 GB)
Qwen 2.5 Coder 14B — by Alibaba · License: Apache 2.0

Parameters: 14B
Context: 128K
VRAM Range: 12–33 GB
Recommended: Q4_K_M (12 GB)

VRAM Requirements by Quantization

Side-by-side memory needs at each quality level.

| Quantization | Codestral 22B | Qwen 2.5 Coder 14B | Difference |
|---|---|---|---|
| Q4_K_M | 14.7 GB | 12 GB | +2.7 GB |
| Q8_0 | 24 GB | 19 GB | +5.0 GB |
| F16 | — | 33 GB | — |

A F16 figure for Codestral 22B is not listed; its published range tops out at Q8_0 (24 GB).
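The figures above follow a simple back-of-envelope rule: weight memory is parameters × bits-per-weight ÷ 8, plus runtime overhead for buffers and the KV cache. A minimal sketch of that arithmetic, assuming ~4.85 effective bits per weight for Q4_K_M and a flat overhead term (both are assumptions for illustration, not Ollama internals):

```python
def estimate_vram_gb(params_billion: float, bits_per_weight: float,
                     overhead_gb: float = 1.5) -> float:
    """Back-of-envelope VRAM estimate: weight memory plus flat overhead.

    Real usage also grows with context length (KV cache), so treat the
    result as a floor rather than an exact figure.
    """
    weights_gb = params_billion * bits_per_weight / 8
    return round(weights_gb + overhead_gb, 1)

# Q4_K_M averages roughly 4.85 effective bits per weight (assumption)
print(estimate_vram_gb(22, 4.85))  # Codestral 22B: close to the 14.7 GB listed
print(estimate_vram_gb(14, 4.85))  # Qwen 14B floor; 128K context pushes it higher
```

This is why the 22B model needs only ~2.7 GB more than the 14B model at Q4_K_M, but the gap widens at Q8_0: the per-parameter cost doubles, so the parameter-count difference matters more.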

Capabilities

Feature support comparison.

| Capability | Codestral 22B | Qwen 2.5 Coder 14B |
|---|---|---|
| Text generation | Yes | Yes |
| Code generation | Yes | Yes |

Benchmark Scores

Higher is better. Scores from published evaluations.

| Benchmark | Codestral 22B | Qwen 2.5 Coder 14B |
|---|---|---|
| MMLU | 60.0 | 72.0 |

Hardware Compatibility

Can each model run at recommended quantization on common VRAM tiers?

| VRAM | Codestral 22B | Qwen 2.5 Coder 14B |
|---|---|---|
| 8 GB | No | Offload |
| 12 GB | Offload | Offload |
| 16 GB | Tight | Runs |
| 24 GB | Runs | Runs |
| 32 GB | Runs | Runs |
| 48 GB | Runs | Runs |
| 64 GB | Runs | Runs |
| 96 GB | Runs | Runs |
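The tiers above can be reproduced with simple headroom thresholds against each model's recommended-quantization footprint. A sketch, with illustrative cutoffs (25% headroom for a comfortable fit, 5% for a tight one, partial CPU offload down to about 60% of the requirement) — the exact percentages are assumptions chosen to match this table, not published Ollama behavior:

```python
def fit_status(vram_gb: float, required_gb: float) -> str:
    """Classify whether a quantized model fits in a given VRAM budget."""
    if vram_gb >= 1.25 * required_gb:
        return "Runs"     # comfortable headroom for KV cache and buffers
    if vram_gb >= 1.05 * required_gb:
        return "Tight"    # fits, but little room for long contexts
    if vram_gb >= 0.60 * required_gb:
        return "Offload"  # partial CPU offload, slower generation
    return "No"

# Codestral 22B at Q4_K_M needs roughly 14.7 GB
for vram in (8, 12, 16, 24):
    print(vram, fit_status(vram, 14.7))
```

Note that Qwen 2.5 Coder 14B lands on "Offload" even at 12 GB, its nominal Q4_K_M size: fitting the weights exactly leaves no headroom for the KV cache, especially with a 128K context window.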

Run Codestral 22B

ollama run codestral:22b-v0.1-q4_K_M

Run Qwen 2.5 Coder 14B

ollama run qwen2.5-coder:14b-instruct-q4_K_M

Check your exact hardware

Use the compatibility checker to see how each model performs on your specific GPU or Mac.
