
NVIDIA RTX 5000 Ada Generation

NVIDIA · 32GB GDDR6 ECC · Can run 56 models

Manufacturer: NVIDIA
VRAM: 32 GB
Memory Type: GDDR6 ECC
Architecture: Ada Lovelace
CUDA Cores: 12,800
Tensor Cores: 400
Bandwidth: 722 GB/s
TDP: 250W
MSRP: $4,000
Released: Aug 8, 2023

AI Notes

The RTX 5000 Ada is a professional workstation GPU with 32GB of ECC VRAM. It runs models up to the 32B class comfortably at Q4_K_M quantization, and 7B–14B models at Q8_0. 70B models exceed the 32GB VRAM budget at Q4_K_M and require CPU offload, which drops throughput to roughly 5 tok/s. The Ada Lovelace tensor cores provide reliable inference performance for production AI workflows that require ECC memory stability.
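Whether a model fits can be estimated before downloading anything. A rough sketch, assuming typical llama.cpp quant sizes (about 4.8 bits/weight for Q4_K_M, about 8.5 bits/weight for Q8_0) plus a fixed allowance for KV cache and runtime buffers; the exact figures vary by model architecture and context length:

```python
# Rough VRAM estimate for a quantized model (a sketch, not an exact calculator).
# Bits-per-weight values are approximate averages for llama.cpp quant formats.
BITS_PER_WEIGHT = {
    "Q4_K_M": 4.8,
    "Q8_0": 8.5,
    "F16": 16.0,
}

def estimate_vram_gb(params_billion: float, quant: str,
                     overhead_gb: float = 1.0) -> float:
    """Weights plus a rough allowance for KV cache and runtime buffers."""
    weight_gb = params_billion * BITS_PER_WEIGHT[quant] / 8
    return weight_gb + overhead_gb

# A 32B model at Q4_K_M lands around 20 GB -- inside this card's 32 GB.
print(round(estimate_vram_gb(32, "Q4_K_M"), 1))
# A 70B model at Q4_K_M needs ~43 GB -- hence the CPU Offload rows below.
print(round(estimate_vram_gb(70, "Q4_K_M"), 1))
```

These estimates line up with the table below (e.g. ~43 GB for 70B Q4_K_M models), but treat them as a first-pass filter, not a guarantee.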

Compatible Models

| Model | Parameters | Best Quant | VRAM Used | Fit | Est. Speed |
| --- | --- | --- | --- | --- | --- |
| Qwen 3 0.6B | 600M | Q4_K_M | 2.5 GB | Runs | ~289 tok/s |
| Gemma 3 1B | 1B | Q8_0 | 2 GB | Runs | ~361 tok/s |
| Llama 3.2 1B | 1B | Q8_0 | 3 GB | Runs | ~241 tok/s |
| DeepSeek R1 1.5B | 1.5B | Q8_0 | 3 GB | Runs | ~241 tok/s |
| Gemma 2 2B | 2B | Q8_0 | 4 GB | Runs | ~181 tok/s |
| Gemma 3n E2B | 2B | Q4_K_M | 3.3 GB | Runs | ~219 tok/s |
| Llama 3.2 3B | 3B | Q8_0 | 5 GB | Runs | ~144 tok/s |
| Phi-3 Mini 3.8B | 3.8B | Q8_0 | 5.8 GB | Runs | ~124 tok/s |
| Phi-4 Mini 3.8B | 3.8B | Q4_K_M | 4.5 GB | Runs | ~160 tok/s |
| Gemma 3 4B | 4B | Q4_K_M | 5 GB | Runs | ~144 tok/s |
| Gemma 3n E4B | 4B | Q4_K_M | 4.5 GB | Runs | ~160 tok/s |
| Qwen 3 4B | 4B | Q4_K_M | 4.5 GB | Runs | ~160 tok/s |
| DeepSeek R1 7B | 7B | Q8_0 | 9 GB | Runs | ~80 tok/s |
| Falcon 3 7B | 7B | Q4_K_M | 6.8 GB | Runs | ~106 tok/s |
| Mistral 7B | 7B | Q8_0 | 9 GB | Runs | ~80 tok/s |
| Qwen 2.5 7B | 7B | Q8_0 | 9 GB | Runs | ~80 tok/s |
| Qwen 2.5 Coder 7B | 7B | Q8_0 | 9 GB | Runs | ~80 tok/s |
| Qwen 2.5 VL 7B | 7B | Q4_K_M | 7 GB | Runs | ~103 tok/s |
| Cogito 8B | 8B | Q4_K_M | 7.5 GB | Runs | ~96 tok/s |
| DeepSeek R1 8B | 8B | Q4_K_M | 7.5 GB | Runs | ~96 tok/s |
| Llama 3.1 8B | 8B | Q8_0 | 10 GB | Runs | ~72 tok/s |
| Nemotron 3 Nano 8B | 8B | Q4_K_M | 7.5 GB | Runs | ~96 tok/s |
| Qwen 3 8B | 8B | Q4_K_M | 7.5 GB | Runs | ~96 tok/s |
| Gemma 2 9B | 9B | Q8_0 | 11 GB | Runs | ~66 tok/s |
| Falcon 3 10B | 10B | Q4_K_M | 8.5 GB | Runs | ~85 tok/s |
| Llama 3.2 Vision 11B | 11B | Q4_K_M | 8.5 GB | Runs | ~85 tok/s |
| Gemma 3 12B | 12B | Q4_K_M | 10.5 GB | Runs | ~69 tok/s |
| Mistral Nemo 12B | 12B | Q4_K_M | 9.5 GB | Runs | ~76 tok/s |
| DeepSeek R1 14B | 14B | Q4_K_M | 9.9 GB | Runs | ~73 tok/s |
| Phi-4 14B | 14B | Q4_K_M | 9.9 GB | Runs | ~73 tok/s |
| Phi-4 Reasoning 14B | 14B | Q4_K_M | 11 GB | Runs | ~66 tok/s |
| Qwen 2.5 14B | 14B | Q4_K_M | 9.9 GB | Runs | ~73 tok/s |
| Qwen 2.5 Coder 14B | 14B | Q4_K_M | 12 GB | Runs | ~60 tok/s |
| Qwen 3 14B | 14B | Q4_K_M | 12 GB | Runs | ~60 tok/s |
| StarCoder2 15B | 15B | Q8_0 | 17 GB | Runs | ~42 tok/s |
| Codestral 22B | 22B | Q4_K_M | 14.7 GB | Runs | ~49 tok/s |
| Devstral 24B | 24B | Q4_K_M | 17 GB | Runs | ~42 tok/s |
| Magistral Small 24B | 24B | Q4_K_M | 17 GB | Runs | ~42 tok/s |
| Mistral Small 3.1 24B | 24B | Q4_K_M | 18 GB | Runs | ~40 tok/s |
| Gemma 2 27B | 27B | Q4_K_M | 17.7 GB | Runs | ~41 tok/s |
| Gemma 3 27B | 27B | Q4_K_M | 20 GB | Runs | ~36 tok/s |
| Qwen 3 30B-A3B (MoE) | 30B | Q4_K_M | 22 GB | Runs | ~33 tok/s |
| Cogito 32B | 32B | Q4_K_M | 21.5 GB | Runs | ~34 tok/s |
| DeepSeek R1 32B | 32B | Q4_K_M | 20.7 GB | Runs | ~35 tok/s |
| Qwen 2.5 32B | 32B | Q4_K_M | 20.7 GB | Runs | ~35 tok/s |
| Qwen 2.5 Coder 32B | 32B | Q4_K_M | 23 GB | Runs | ~31 tok/s |
| Qwen 3 32B | 32B | Q4_K_M | 23 GB | Runs | ~31 tok/s |
| QwQ 32B | 32B | Q4_K_M | 21.5 GB | Runs | ~34 tok/s |
| Command R 35B | 35B | Q4_K_M | 22.5 GB | Runs | ~32 tok/s |
| Mixtral 8x7B | 47B | Q4_K_M | 29.7 GB | Runs (tight) | ~24 tok/s |
| Cogito 70B | 70B | Q4_K_M | 43 GB | CPU Offload | ~5 tok/s |
| DeepSeek R1 70B | 70B | Q4_K_M | 43.5 GB | CPU Offload | ~5 tok/s |
| Llama 3.1 70B | 70B | Q4_K_M | 43.5 GB | CPU Offload | ~5 tok/s |
| Llama 3.3 70B | 70B | Q4_K_M | 43.5 GB | CPU Offload | ~5 tok/s |
| Qwen 2.5 72B | 72B | Q4_K_M | 44.7 GB | CPU Offload | ~5 tok/s |
| Qwen 2.5 VL 72B | 72B | Q4_K_M | 41 GB | CPU Offload | ~5 tok/s |
13 models are too large for this hardware.
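The estimated speeds above follow a simple pattern: single-stream decoding is memory-bandwidth bound, since generating each token requires reading every weight from VRAM once. A minimal sketch of that rule of thumb, using this card's 722 GB/s bandwidth from the spec list; it reproduces the table's estimates but ignores compute limits, KV-cache reads, and batching:

```python
# Decode throughput is roughly memory-bandwidth bound: each new token
# streams all model weights from VRAM, so tok/s ~= bandwidth / model size.
BANDWIDTH_GBPS = 722  # RTX 5000 Ada memory bandwidth, from the spec list above

def estimate_tok_per_s(model_size_gb: float) -> float:
    return BANDWIDTH_GBPS / model_size_gb

print(round(estimate_tok_per_s(9)))     # Mistral 7B Q8_0 at 9 GB    -> 80
print(round(estimate_tok_per_s(20.7)))  # DeepSeek R1 32B Q4_K_M     -> 35
print(round(estimate_tok_per_s(2.5)))   # Qwen 3 0.6B Q4_K_M         -> 289
```

This also explains the CPU Offload rows: once layers spill to system RAM, throughput is gated by the much lower CPU memory and PCIe bandwidth, not the GPU's 722 GB/s.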