NVIDIA GeForce RTX 5090

NVIDIA · 32 GB GDDR7 · Can run 24 models

Manufacturer  NVIDIA
VRAM          32 GB
Memory Type   GDDR7
Architecture  Blackwell
CUDA Cores    21,760
Tensor Cores  680
TDP           575 W
MSRP          $1,999
Released      Jan 30, 2025

AI Notes

The RTX 5090 is the ultimate consumer GPU for local AI. With 32 GB of GDDR7 VRAM, it runs small and mid-size models (up to roughly 15B parameters) at Q8_0 and 30B-class models at Q4_K_M entirely in VRAM; 70B models even at Q4_K_M (~43 GB) still require CPU offload. The massive CUDA and Tensor core counts deliver exceptional inference throughput for real-time AI workloads.
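The VRAM figures in the table below are consistent with a simple back-of-the-envelope formula: weight size (parameters × bits-per-weight ÷ 8) plus a flat allowance for the KV cache and runtime buffers. A minimal sketch in Python; the bits-per-weight values and the ~1.3 GB overhead are assumptions tuned to roughly match the table, not official llama.cpp numbers:

```python
# Rough VRAM estimate for a GGUF-quantized model, in GB.
# BITS_PER_WEIGHT values are approximate effective bits-per-weight for
# common llama.cpp quant formats (an assumption, not authoritative),
# and overhead_gb is a guessed flat allowance for KV cache and buffers.
BITS_PER_WEIGHT = {"Q8_0": 8.5, "Q4_K_M": 4.85, "F16": 16.0}

def estimate_vram_gb(params_billions: float, quant: str,
                     overhead_gb: float = 1.3) -> float:
    """Estimate VRAM needed to load a model at the given quantization."""
    weights_gb = params_billions * BITS_PER_WEIGHT[quant] / 8
    return round(weights_gb + overhead_gb, 1)
```

For example, `estimate_vram_gb(14, "Q4_K_M")` lands near the 9.9 GB the table lists for the 14B models, and a 70B model at Q4_K_M comes out well above this card's 32 GB, matching its CPU Offload rating.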

Compatible Models

Model              Parameters  Best Quant  VRAM Used  Fit
Llama 3.2 1B       1B          Q8_0        3 GB       Runs
Gemma 2 2B         2B          Q8_0        4 GB       Runs
Llama 3.2 3B       3B          Q8_0        5 GB       Runs
Phi-3 Mini 3.8B    3.8B        Q8_0        5.8 GB     Runs
DeepSeek R1 7B     7B          Q8_0        9 GB       Runs
Mistral 7B         7B          Q8_0        9 GB       Runs
Qwen 2.5 7B        7B          Q8_0        9 GB       Runs
Qwen 2.5 Coder 7B  7B          Q8_0        9 GB       Runs
Llama 3.1 8B       8B          Q8_0        10 GB      Runs
Gemma 2 9B         9B          Q8_0        11 GB      Runs
DeepSeek R1 14B    14B         Q4_K_M      9.9 GB     Runs
Phi-4 14B          14B         Q4_K_M      9.9 GB     Runs
Qwen 2.5 14B       14B         Q4_K_M      9.9 GB     Runs
StarCoder2 15B     15B         Q8_0        17 GB      Runs
Codestral 22B      22B         Q4_K_M      14.7 GB    Runs
Gemma 2 27B        27B         Q4_K_M      17.7 GB    Runs
DeepSeek R1 32B    32B         Q4_K_M      20.7 GB    Runs
Qwen 2.5 32B       32B         Q4_K_M      20.7 GB    Runs
Command R 35B      35B         Q4_K_M      22.5 GB    Runs
Mixtral 8x7B       47B         Q4_K_M      29.7 GB    Runs (tight)
DeepSeek R1 70B    70B         Q4_K_M      43.5 GB    CPU Offload
Llama 3.1 70B      70B         Q4_K_M      43.5 GB    CPU Offload
Llama 3.3 70B      70B         Q4_K_M      43.5 GB    CPU Offload
Qwen 2.5 72B       72B         Q4_K_M      44.7 GB    CPU Offload
1 model is too large for this hardware.
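The Fit column above reduces to a threshold check against the card's 32 GB. A minimal sketch; the 90% "tight" cutoff is an assumption chosen to match the Mixtral 8x7B row (29.7 GB on 32 GB), not an official rule:

```python
def fit_label(vram_needed_gb: float, card_vram_gb: float = 32.0) -> str:
    """Classify model fit the way the table's Fit column does."""
    if vram_needed_gb > card_vram_gb:
        return "CPU Offload"   # weights alone exceed VRAM; layers spill to system RAM
    if vram_needed_gb > 0.9 * card_vram_gb:
        return "Runs (tight)"  # fits, but little headroom for KV cache and context
    return "Runs"
```

Note that "CPU Offload" models still run, just slower: llama.cpp-style runtimes keep as many layers as fit on the GPU and evaluate the rest on the CPU.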