NVIDIA RTX PRO 5000 Blackwell
NVIDIA · 48GB GDDR7 · Can run 62 models
| Specification | Value |
|---|---|
| Manufacturer | NVIDIA |
| VRAM | 48 GB |
| Memory Type | GDDR7 |
| Architecture | Blackwell |
| CUDA Cores | 14,080 |
| Tensor Cores | 440 |
| Bandwidth | 960 GB/s |
| TDP | 300W |
| MSRP | $3,250 |
| Released | Mar 18, 2025 |
AI Notes
The RTX PRO 5000 Blackwell is a professional workstation GPU with 48 GB of GDDR7 VRAM. It runs models up to the ~32B class at high quantizations and 70B-class models at Q4_K_M (a tight fit at roughly 43-45 GB), making it a strong choice for serious local AI inference. The 960 GB/s of memory bandwidth and Blackwell-generation tensor cores keep token generation fast across a wide range of model sizes.
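The per-model figures in the table below are consistent with a simple back-of-the-envelope model: weights occupy roughly the quant's bits-per-weight plus some overhead, and decode speed is bounded by how fast the 960 GB/s memory bus can re-read those weights for each generated token. The sketch below reproduces the table's pattern; the bits-per-weight constants and the 1 GB overhead term are common rules of thumb, not published figures for this card.

```python
# Rough VRAM and decode-speed estimates for a dense model, assuming:
#  - GGUF Q4_K_M ~ 4.8 bits/weight, Q8_0 ~ 8.5 bits/weight (rules of thumb)
#  - decode is memory-bandwidth bound: tok/s ~ bandwidth / bytes read per token
BANDWIDTH_GBS = 960  # RTX PRO 5000 Blackwell, from the spec table above

BITS_PER_WEIGHT = {"Q4_K_M": 4.8, "Q8_0": 8.5}

def estimate(params_b: float, quant: str, overhead_gb: float = 1.0):
    """Return (vram_gb, tok_s) for a model of `params_b` billion parameters."""
    weights_gb = params_b * BITS_PER_WEIGHT[quant] / 8
    vram_gb = weights_gb + overhead_gb   # KV cache + buffers, very rough
    tok_s = BANDWIDTH_GBS / vram_gb      # every token re-reads the weights
    return vram_gb, tok_s

print(estimate(7, "Q8_0"))     # ~(8.4 GB, ~114 tok/s); table: 9 GB, ~107 tok/s
print(estimate(32, "Q4_K_M"))  # ~(20.2 GB, ~48 tok/s); table: 20.7 GB, ~46 tok/s
```

Note that the estimate only holds while the whole model fits in VRAM; the "CPU Offload" rows in the table fall well below the bandwidth-bound figure.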
Compatible Models
| Model | Parameters | Best Quant | VRAM Used | Fit | Est. Speed |
|---|---|---|---|---|---|
| Qwen 3 0.6B | 600M | Q4_K_M | 2.5 GB | Runs | ~384 tok/s |
| Gemma 3 1B | 1B | Q8_0 | 2 GB | Runs | ~480 tok/s |
| Llama 3.2 1B | 1B | Q8_0 | 3 GB | Runs | ~320 tok/s |
| DeepSeek R1 1.5B | 1.5B | Q8_0 | 3 GB | Runs | ~320 tok/s |
| Gemma 2 2B | 2B | Q8_0 | 4 GB | Runs | ~240 tok/s |
| Gemma 3n E2B | 2B | Q4_K_M | 3.3 GB | Runs | ~291 tok/s |
| Llama 3.2 3B | 3B | Q8_0 | 5 GB | Runs | ~192 tok/s |
| Phi-3 Mini 3.8B | 3.8B | Q8_0 | 5.8 GB | Runs | ~166 tok/s |
| Phi-4 Mini 3.8B | 3.8B | Q4_K_M | 4.5 GB | Runs | ~213 tok/s |
| Gemma 3 4B | 4B | Q4_K_M | 5 GB | Runs | ~192 tok/s |
| Gemma 3n E4B | 4B | Q4_K_M | 4.5 GB | Runs | ~213 tok/s |
| Qwen 3 4B | 4B | Q4_K_M | 4.5 GB | Runs | ~213 tok/s |
| DeepSeek R1 7B | 7B | Q8_0 | 9 GB | Runs | ~107 tok/s |
| Falcon 3 7B | 7B | Q4_K_M | 6.8 GB | Runs | ~141 tok/s |
| Mistral 7B | 7B | Q8_0 | 9 GB | Runs | ~107 tok/s |
| Qwen 2.5 7B | 7B | Q8_0 | 9 GB | Runs | ~107 tok/s |
| Qwen 2.5 Coder 7B | 7B | Q8_0 | 9 GB | Runs | ~107 tok/s |
| Qwen 2.5 VL 7B | 7B | Q4_K_M | 7 GB | Runs | ~137 tok/s |
| Cogito 8B | 8B | Q4_K_M | 7.5 GB | Runs | ~128 tok/s |
| DeepSeek R1 8B | 8B | Q4_K_M | 7.5 GB | Runs | ~128 tok/s |
| Llama 3.1 8B | 8B | Q8_0 | 10 GB | Runs | ~96 tok/s |
| Nemotron 3 Nano 8B | 8B | Q4_K_M | 7.5 GB | Runs | ~128 tok/s |
| Qwen 3 8B | 8B | Q4_K_M | 7.5 GB | Runs | ~128 tok/s |
| Gemma 2 9B | 9B | Q8_0 | 11 GB | Runs | ~87 tok/s |
| Falcon 3 10B | 10B | Q4_K_M | 8.5 GB | Runs | ~113 tok/s |
| Llama 3.2 Vision 11B | 11B | Q4_K_M | 8.5 GB | Runs | ~113 tok/s |
| Gemma 3 12B | 12B | Q4_K_M | 10.5 GB | Runs | ~91 tok/s |
| Mistral Nemo 12B | 12B | Q4_K_M | 9.5 GB | Runs | ~101 tok/s |
| DeepSeek R1 14B | 14B | Q4_K_M | 9.9 GB | Runs | ~97 tok/s |
| Phi-4 14B | 14B | Q4_K_M | 9.9 GB | Runs | ~97 tok/s |
| Phi-4 Reasoning 14B | 14B | Q4_K_M | 11 GB | Runs | ~87 tok/s |
| Qwen 2.5 14B | 14B | Q4_K_M | 9.9 GB | Runs | ~97 tok/s |
| Qwen 2.5 Coder 14B | 14B | Q4_K_M | 12 GB | Runs | ~80 tok/s |
| Qwen 3 14B | 14B | Q4_K_M | 12 GB | Runs | ~80 tok/s |
| StarCoder2 15B | 15B | Q8_0 | 17 GB | Runs | ~56 tok/s |
| Codestral 22B | 22B | Q4_K_M | 14.7 GB | Runs | ~65 tok/s |
| Devstral 24B | 24B | Q4_K_M | 17 GB | Runs | ~56 tok/s |
| Magistral Small 24B | 24B | Q4_K_M | 17 GB | Runs | ~56 tok/s |
| Mistral Small 3.1 24B | 24B | Q4_K_M | 18 GB | Runs | ~53 tok/s |
| Gemma 2 27B | 27B | Q4_K_M | 17.7 GB | Runs | ~54 tok/s |
| Gemma 3 27B | 27B | Q4_K_M | 20 GB | Runs | ~48 tok/s |
| Qwen 3 30B-A3B (MoE) | 30B | Q4_K_M | 22 GB | Runs | ~44 tok/s |
| Cogito 32B | 32B | Q4_K_M | 21.5 GB | Runs | ~45 tok/s |
| DeepSeek R1 32B | 32B | Q4_K_M | 20.7 GB | Runs | ~46 tok/s |
| Qwen 2.5 32B | 32B | Q4_K_M | 20.7 GB | Runs | ~46 tok/s |
| Qwen 2.5 Coder 32B | 32B | Q4_K_M | 23 GB | Runs | ~42 tok/s |
| Qwen 3 32B | 32B | Q4_K_M | 23 GB | Runs | ~42 tok/s |
| QwQ 32B | 32B | Q4_K_M | 21.5 GB | Runs | ~45 tok/s |
| Command R 35B | 35B | Q4_K_M | 22.5 GB | Runs | ~43 tok/s |
| Mixtral 8x7B | 47B | Q4_K_M | 29.7 GB | Runs | ~32 tok/s |
| Cogito 70B | 70B | Q4_K_M | 43 GB | Runs (tight) | ~22 tok/s |
| DeepSeek R1 70B | 70B | Q4_K_M | 43.5 GB | Runs (tight) | ~22 tok/s |
| Llama 3.1 70B | 70B | Q4_K_M | 43.5 GB | Runs (tight) | ~22 tok/s |
| Llama 3.3 70B | 70B | Q4_K_M | 43.5 GB | Runs (tight) | ~22 tok/s |
| Qwen 2.5 72B | 72B | Q4_K_M | 44.7 GB | Runs (tight) | ~21 tok/s |
| Qwen 2.5 VL 72B | 72B | Q4_K_M | 41 GB | Runs (tight) | ~23 tok/s |
| Llama 3.2 Vision 90B | 90B | Q4_K_M | 50 GB | CPU Offload | ~6 tok/s |
| Command R+ 104B | 104B | Q4_K_M | 57 GB | CPU Offload | ~5 tok/s |
| Llama 4 Scout (109B/17B active) | 109B | Q4_K_M | 72 GB | CPU Offload | ~4 tok/s |
| Command A 111B | 111B | Q4_K_M | 61 GB | CPU Offload | ~5 tok/s |
| Devstral 2 123B | 123B | Q4_K_M | 67 GB | CPU Offload | ~4 tok/s |
| Mistral Large 2 123B | 123B | Q4_K_M | 67 GB | CPU Offload | ~4 tok/s |
7 models are too large for this hardware.
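For the "CPU Offload" rows, a model whose Q4_K_M weights exceed 48 GB can still run by keeping some transformer layers in system RAM, at the cost of the much lower speeds shown above. A minimal sketch using the llama-cpp-python bindings; the file name and layer count are illustrative assumptions, and the right `n_gpu_layers` value has to be tuned down until the model loads without an out-of-memory error:

```python
from llama_cpp import Llama

# Hypothetical GGUF path: Llama 3.2 Vision 90B at Q4_K_M needs ~50 GB,
# so only part of it fits in the PRO 5000's 48 GB of VRAM.
llm = Llama(
    model_path="llama-3.2-90b-vision.Q4_K_M.gguf",  # illustrative filename
    n_gpu_layers=70,  # layers kept on the GPU; reduce this on OOM
    n_ctx=4096,       # context window; larger contexts need more KV-cache VRAM
)

out = llm("Why is the sky blue?", max_tokens=64)
print(out["choices"][0]["text"])
```

Fully resident models (the "Runs" and "Runs (tight)" rows) can instead set `n_gpu_layers=-1` to offload every layer to the GPU.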