NVIDIA GeForce RTX 4090

NVIDIA · 24GB GDDR6X · Can run 20 models

Manufacturer NVIDIA
VRAM 24 GB
Memory Type GDDR6X
Architecture Ada Lovelace
CUDA Cores 16,384
Tensor Cores 512
TDP 450W
MSRP $1,599
Released Oct 12, 2022

AI Notes

The RTX 4090 remains one of the best GPUs for local AI inference. Its 24 GB of GDDR6X VRAM fits 13B-class models at 8-bit quantization and 30B-class models at 4-bit quantization (a 13B model at full FP16 precision needs roughly 26 GB for weights alone, so it does not fit). The large tensor core count delivers class-leading inference throughput among consumer GPUs.
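The VRAM figures in the table below follow directly from parameter count and quantization level. A rough back-of-envelope sketch, assuming approximate bits-per-weight for common GGUF quant types and a fixed overhead for the KV cache and runtime buffers (both figures are illustrative assumptions, not measured values):

```python
# Rough VRAM estimate for a quantized LLM.
# Heuristic only: bits-per-weight values are approximations for
# GGUF quant formats, and the 1.5 GB overhead for KV cache and
# runtime buffers is an assumed constant, not a measured figure.

BITS_PER_WEIGHT = {
    "F16": 16.0,     # full half precision
    "Q8_0": 8.5,     # ~8.5 bpw including block scales
    "Q4_K_M": 4.85,  # ~4.85 bpw mixed 4/6-bit
}

def estimate_vram_gb(params_b: float, quant: str, overhead_gb: float = 1.5) -> float:
    """Weights (GB) = params in billions * bits per weight / 8, plus overhead."""
    weights_gb = params_b * BITS_PER_WEIGHT[quant] / 8
    return round(weights_gb + overhead_gb, 1)

print(estimate_vram_gb(14, "Q4_K_M"))  # prints 10.0 — close to the table's 9.9 GB
print(estimate_vram_gb(13, "F16"))     # prints 27.5 — why FP16 13B exceeds 24 GB
```

The same formula reproduces the table's other entries to within a gigabyte or so; real usage also grows with context length, which this sketch folds into the fixed overhead.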

Compatible Models

Model              | Parameters | Best Quant | VRAM Used | Fit
Llama 3.2 1B       | 1B         | Q8_0       | 3 GB      | Runs
Gemma 2 2B         | 2B         | Q8_0       | 4 GB      | Runs
Llama 3.2 3B       | 3B         | Q8_0       | 5 GB      | Runs
Phi-3 Mini 3.8B    | 3.8B       | Q8_0       | 5.8 GB    | Runs
DeepSeek R1 7B     | 7B         | Q8_0       | 9 GB      | Runs
Mistral 7B         | 7B         | Q8_0       | 9 GB      | Runs
Qwen 2.5 7B        | 7B         | Q8_0       | 9 GB      | Runs
Qwen 2.5 Coder 7B  | 7B         | Q8_0       | 9 GB      | Runs
Llama 3.1 8B       | 8B         | Q8_0       | 10 GB     | Runs
Gemma 2 9B         | 9B         | Q8_0       | 11 GB     | Runs
DeepSeek R1 14B    | 14B        | Q4_K_M     | 9.9 GB    | Runs
Phi-4 14B          | 14B        | Q4_K_M     | 9.9 GB    | Runs
Qwen 2.5 14B       | 14B        | Q4_K_M     | 9.9 GB    | Runs
StarCoder2 15B     | 15B        | Q8_0       | 17 GB     | Runs
Codestral 22B      | 22B        | Q4_K_M     | 14.7 GB   | Runs
Gemma 2 27B        | 27B        | Q4_K_M     | 17.7 GB   | Runs
DeepSeek R1 32B    | 32B        | Q4_K_M     | 20.7 GB   | Runs (tight)
Qwen 2.5 32B       | 32B        | Q4_K_M     | 20.7 GB   | Runs (tight)
Command R 35B      | 35B        | Q4_K_M     | 22.5 GB   | Runs (tight)
Mixtral 8x7B       | 47B        | Q4_K_M     | 29.7 GB   | CPU Offload
5 models are too large for this hardware.
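The Fit column above can be sketched as a simple threshold check against the card's 24 GB of VRAM. The specific cutoffs below are assumptions chosen to reproduce the table, not the site's actual classification rules:

```python
# Sketch of the Fit classification, assuming illustrative thresholds:
# "Runs" with comfortable headroom, "Runs (tight)" near the VRAM
# limit, and "CPU Offload" once weights exceed total VRAM.

GPU_VRAM_GB = 24.0

def fit_status(vram_used_gb: float, total_gb: float = GPU_VRAM_GB) -> str:
    if vram_used_gb <= total_gb * 0.85:  # assumed 85% headroom cutoff
        return "Runs"
    if vram_used_gb <= total_gb:         # fits, but little room for context
        return "Runs (tight)"
    return "CPU Offload"                 # must spill layers to system RAM

print(fit_status(17.7))  # prints Runs — matches Gemma 2 27B
print(fit_status(20.7))  # prints Runs (tight) — matches DeepSeek R1 32B
print(fit_status(29.7))  # prints CPU Offload — matches Mixtral 8x7B
```

Note that "Runs (tight)" entries leave only a few gigabytes for the KV cache, so long contexts may still force a smaller quant or partial offload.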