NVIDIA GeForce RTX 4090

NVIDIA · 24GB GDDR6X · Can run 50 models

Manufacturer: NVIDIA
VRAM: 24 GB
Memory Type: GDDR6X
Architecture: Ada Lovelace
CUDA Cores: 16,384
Tensor Cores: 512
Memory Bandwidth: 1008 GB/s
TDP: 450 W
MSRP: $1,599
Released: Oct 12, 2022

AI Notes

The RTX 4090 remains one of the best consumer GPUs for local AI inference. Its 24 GB of GDDR6X VRAM can hold 13-15B models at 8-bit quantization and 30B-class models at 4-bit quantization, and its large tensor core count delivers class-leading inference throughput among consumer GPUs.
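As a rough rule of thumb, a model's VRAM footprint is its parameter count times the average bits per weight of the quantization, plus some allowance for the KV cache and runtime buffers. Below is a minimal sketch of that estimate; the bit-widths (~4.8 bpw for Q4_K_M, ~8.5 bpw for Q8_0) and the flat 1.5 GB overhead are typical GGUF-style assumptions, not values taken from this page.

```python
def estimate_vram_gb(params_b: float, bits_per_weight: float,
                     overhead_gb: float = 1.5) -> float:
    """Rough VRAM estimate in GB: weight bytes plus a flat allowance
    for KV cache and runtime buffers (overhead_gb is an assumption)."""
    weight_gb = params_b * bits_per_weight / 8
    return weight_gb + overhead_gb

# Approximate GGUF averages: Q4_K_M ~4.8 bpw, Q8_0 ~8.5 bpw
print(estimate_vram_gb(8, 4.8))   # ~6.3 GB: an 8B model at Q4_K_M fits easily in 24 GB
print(estimate_vram_gb(32, 4.8))  # ~20.7 GB: a 32B model at Q4_K_M is a tight fit
```

Real usage varies with context length and runtime, so treat the output as a fit/no-fit screen rather than a precise number.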

Compatible Models

| Model | Parameters | Best Quant | VRAM Used | Fit | Est. Speed |
|---|---|---|---|---|---|
| Qwen 3 0.6B | 600M | Q4_K_M | 2.5 GB | Runs | ~403 tok/s |
| Gemma 3 1B | 1B | Q8_0 | 2 GB | Runs | ~504 tok/s |
| Llama 3.2 1B | 1B | Q8_0 | 3 GB | Runs | ~336 tok/s |
| DeepSeek R1 1.5B | 1.5B | Q8_0 | 3 GB | Runs | ~336 tok/s |
| Gemma 2 2B | 2B | Q8_0 | 4 GB | Runs | ~252 tok/s |
| Gemma 3n E2B | 2B | Q4_K_M | 3.3 GB | Runs | ~305 tok/s |
| Llama 3.2 3B | 3B | Q8_0 | 5 GB | Runs | ~202 tok/s |
| Phi-3 Mini 3.8B | 3.8B | Q8_0 | 5.8 GB | Runs | ~174 tok/s |
| Phi-4 Mini 3.8B | 3.8B | Q4_K_M | 4.5 GB | Runs | ~224 tok/s |
| Gemma 3 4B | 4B | Q4_K_M | 5 GB | Runs | ~202 tok/s |
| Gemma 3n E4B | 4B | Q4_K_M | 4.5 GB | Runs | ~224 tok/s |
| Qwen 3 4B | 4B | Q4_K_M | 4.5 GB | Runs | ~224 tok/s |
| DeepSeek R1 7B | 7B | Q8_0 | 9 GB | Runs | ~112 tok/s |
| Falcon 3 7B | 7B | Q4_K_M | 6.8 GB | Runs | ~148 tok/s |
| Mistral 7B | 7B | Q8_0 | 9 GB | Runs | ~112 tok/s |
| Qwen 2.5 7B | 7B | Q8_0 | 9 GB | Runs | ~112 tok/s |
| Qwen 2.5 Coder 7B | 7B | Q8_0 | 9 GB | Runs | ~112 tok/s |
| Qwen 2.5 VL 7B | 7B | Q4_K_M | 7 GB | Runs | ~144 tok/s |
| Cogito 8B | 8B | Q4_K_M | 7.5 GB | Runs | ~134 tok/s |
| DeepSeek R1 8B | 8B | Q4_K_M | 7.5 GB | Runs | ~134 tok/s |
| Llama 3.1 8B | 8B | Q8_0 | 10 GB | Runs | ~101 tok/s |
| Nemotron 3 Nano 8B | 8B | Q4_K_M | 7.5 GB | Runs | ~134 tok/s |
| Qwen 3 8B | 8B | Q4_K_M | 7.5 GB | Runs | ~134 tok/s |
| Gemma 2 9B | 9B | Q8_0 | 11 GB | Runs | ~92 tok/s |
| Falcon 3 10B | 10B | Q4_K_M | 8.5 GB | Runs | ~119 tok/s |
| Llama 3.2 Vision 11B | 11B | Q4_K_M | 8.5 GB | Runs | ~119 tok/s |
| Gemma 3 12B | 12B | Q4_K_M | 10.5 GB | Runs | ~96 tok/s |
| Mistral Nemo 12B | 12B | Q4_K_M | 9.5 GB | Runs | ~106 tok/s |
| DeepSeek R1 14B | 14B | Q4_K_M | 9.9 GB | Runs | ~102 tok/s |
| Phi-4 14B | 14B | Q4_K_M | 9.9 GB | Runs | ~102 tok/s |
| Phi-4 Reasoning 14B | 14B | Q4_K_M | 11 GB | Runs | ~92 tok/s |
| Qwen 2.5 14B | 14B | Q4_K_M | 9.9 GB | Runs | ~102 tok/s |
| Qwen 2.5 Coder 14B | 14B | Q4_K_M | 12 GB | Runs | ~84 tok/s |
| Qwen 3 14B | 14B | Q4_K_M | 12 GB | Runs | ~84 tok/s |
| StarCoder2 15B | 15B | Q8_0 | 17 GB | Runs | ~59 tok/s |
| Codestral 22B | 22B | Q4_K_M | 14.7 GB | Runs | ~69 tok/s |
| Devstral 24B | 24B | Q4_K_M | 17 GB | Runs | ~59 tok/s |
| Magistral Small 24B | 24B | Q4_K_M | 17 GB | Runs | ~59 tok/s |
| Mistral Small 3.1 24B | 24B | Q4_K_M | 18 GB | Runs | ~56 tok/s |
| Gemma 2 27B | 27B | Q4_K_M | 17.7 GB | Runs | ~57 tok/s |
| Gemma 3 27B | 27B | Q4_K_M | 20 GB | Runs | ~50 tok/s |
| Qwen 3 30B-A3B (MoE) | 30B | Q4_K_M | 22 GB | Runs (tight) | ~46 tok/s |
| Cogito 32B | 32B | Q4_K_M | 21.5 GB | Runs (tight) | ~47 tok/s |
| DeepSeek R1 32B | 32B | Q4_K_M | 20.7 GB | Runs (tight) | ~49 tok/s |
| Qwen 2.5 32B | 32B | Q4_K_M | 20.7 GB | Runs (tight) | ~49 tok/s |
| QwQ 32B | 32B | Q4_K_M | 21.5 GB | Runs (tight) | ~47 tok/s |
| Command R 35B | 35B | Q4_K_M | 22.5 GB | Runs (tight) | ~45 tok/s |
| Qwen 2.5 Coder 32B | 32B | Q4_K_M | 23 GB | CPU Offload | ~13 tok/s |
| Qwen 3 32B | 32B | Q4_K_M | 23 GB | CPU Offload | ~13 tok/s |
| Mixtral 8x7B | 47B | Q4_K_M | 29.7 GB | CPU Offload | ~10 tok/s |
19 models are too large for this hardware.
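The speed estimates above are consistent with a simple memory-bandwidth-bound model of single-stream decoding: generating each token streams all resident weights through the memory bus once, so throughput is roughly bandwidth divided by model size. A minimal sketch of that calculation, using the 1008 GB/s figure from the spec list (real throughput also depends on batch size, context length, and the runtime):

```python
RTX_4090_BANDWIDTH_GBPS = 1008  # from the spec list above

def estimate_tok_per_s(model_gb: float,
                       bandwidth_gbps: float = RTX_4090_BANDWIDTH_GBPS) -> float:
    """Bandwidth-bound estimate for single-stream decode:
    each generated token reads all resident weights once."""
    return bandwidth_gbps / model_gb

print(round(estimate_tok_per_s(2.5)))   # 403 tok/s, matching the Qwen 3 0.6B row
print(round(estimate_tok_per_s(20.7)))  # 49 tok/s, matching the 32B Q4_K_M rows
```

This is why the smallest models decode fastest here despite every model using the same GPU: decode speed scales inversely with the bytes that must be read per token, not with compute.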