NVIDIA GeForce RTX 3090
NVIDIA · 24GB GDDR6X · Can run 50 models
| Specification | Value |
|---|---|
| Manufacturer | NVIDIA |
| VRAM | 24 GB |
| Memory Type | GDDR6X |
| Architecture | Ampere |
| CUDA Cores | 10,496 |
| Tensor Cores | 328 |
| Bandwidth | 936 GB/s |
| TDP | 350W |
| MSRP | $1,499 |
| Released | Sep 24, 2020 |
AI Notes
The RTX 3090 remains a strong contender for local AI with its generous 24GB of GDDR6X VRAM. It can load 13B models at 8-bit precision and run 30B+ models with 4-bit quantization. Available at steep discounts on the used market, it offers excellent VRAM-per-dollar for AI workloads.
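As a rough rule of thumb (a heuristic assumed for this sketch, not measured figures), a quantized model's weights take about params × bits-per-weight ÷ 8 bytes plus some margin for KV cache and runtime buffers, and decode speed on a memory-bandwidth-bound GPU is roughly memory bandwidth divided by the bytes read per token. The sketch below applies that heuristic to the 3090's 24 GB and 936 GB/s; the bits-per-weight and overhead constants are illustrative assumptions.

```python
# Back-of-the-envelope sizing for quantized dense LLMs on an RTX 3090.
# Assumptions (not measured values): ~4.5 bits/weight for Q4_K_M,
# ~8.5 bits/weight for Q8_0, and ~15% overhead for KV cache/buffers.

VRAM_GB = 24.0          # RTX 3090
BANDWIDTH_GBPS = 936.0  # GDDR6X memory bandwidth

BITS_PER_WEIGHT = {"Q4_K_M": 4.5, "Q8_0": 8.5, "FP16": 16.0}

def estimate(params_billion: float, quant: str, overhead: float = 0.15):
    """Return (estimated VRAM in GB, fits-in-VRAM flag, rough decode tok/s)."""
    weight_gb = params_billion * BITS_PER_WEIGHT[quant] / 8.0
    total_gb = weight_gb * (1.0 + overhead)
    fits = total_gb <= VRAM_GB
    # Decoding is usually memory-bandwidth bound: every token reads all weights,
    # so tok/s is roughly bandwidth divided by the resident model size.
    tok_s = BANDWIDTH_GBPS / total_gb if fits else None
    return total_gb, fits, tok_s

for name, params, quant in [("Llama 3.1 8B", 8, "Q8_0"),
                            ("Qwen 2.5 14B", 14, "Q4_K_M"),
                            ("Gemma 3 27B", 27, "Q4_K_M")]:
    gb, fits, tok_s = estimate(params, quant)
    speed = f"~{tok_s:.0f} tok/s" if fits else "needs offload"
    print(f"{name:15s} {quant:7s} ~{gb:.1f} GB  fits={fits}  {speed}")
```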
Compatible Models
| Model | Parameters | Best Quant | VRAM Used | Fit | Est. Speed |
|---|---|---|---|---|---|
| Qwen 3 0.6B | 600M | Q4_K_M | 2.5 GB | Runs | ~374 tok/s |
| Gemma 3 1B | 1B | Q8_0 | 2 GB | Runs | ~468 tok/s |
| Llama 3.2 1B | 1B | Q8_0 | 3 GB | Runs | ~312 tok/s |
| DeepSeek R1 1.5B | 1.5B | Q8_0 | 3 GB | Runs | ~312 tok/s |
| Gemma 2 2B | 2B | Q8_0 | 4 GB | Runs | ~234 tok/s |
| Gemma 3n E2B | 2B | Q4_K_M | 3.3 GB | Runs | ~284 tok/s |
| Llama 3.2 3B | 3B | Q8_0 | 5 GB | Runs | ~187 tok/s |
| Phi-3 Mini 3.8B | 3.8B | Q8_0 | 5.8 GB | Runs | ~161 tok/s |
| Phi-4 Mini 3.8B | 3.8B | Q4_K_M | 4.5 GB | Runs | ~208 tok/s |
| Gemma 3 4B | 4B | Q4_K_M | 5 GB | Runs | ~187 tok/s |
| Gemma 3n E4B | 4B | Q4_K_M | 4.5 GB | Runs | ~208 tok/s |
| Qwen 3 4B | 4B | Q4_K_M | 4.5 GB | Runs | ~208 tok/s |
| DeepSeek R1 7B | 7B | Q8_0 | 9 GB | Runs | ~104 tok/s |
| Falcon 3 7B | 7B | Q4_K_M | 6.8 GB | Runs | ~138 tok/s |
| Mistral 7B | 7B | Q8_0 | 9 GB | Runs | ~104 tok/s |
| Qwen 2.5 7B | 7B | Q8_0 | 9 GB | Runs | ~104 tok/s |
| Qwen 2.5 Coder 7B | 7B | Q8_0 | 9 GB | Runs | ~104 tok/s |
| Qwen 2.5 VL 7B | 7B | Q4_K_M | 7 GB | Runs | ~134 tok/s |
| Cogito 8B | 8B | Q4_K_M | 7.5 GB | Runs | ~125 tok/s |
| DeepSeek R1 8B | 8B | Q4_K_M | 7.5 GB | Runs | ~125 tok/s |
| Llama 3.1 8B | 8B | Q8_0 | 10 GB | Runs | ~94 tok/s |
| Nemotron 3 Nano 8B | 8B | Q4_K_M | 7.5 GB | Runs | ~125 tok/s |
| Qwen 3 8B | 8B | Q4_K_M | 7.5 GB | Runs | ~125 tok/s |
| Gemma 2 9B | 9B | Q8_0 | 11 GB | Runs | ~85 tok/s |
| Falcon 3 10B | 10B | Q4_K_M | 8.5 GB | Runs | ~110 tok/s |
| Llama 3.2 Vision 11B | 11B | Q4_K_M | 8.5 GB | Runs | ~110 tok/s |
| Gemma 3 12B | 12B | Q4_K_M | 10.5 GB | Runs | ~89 tok/s |
| Mistral Nemo 12B | 12B | Q4_K_M | 9.5 GB | Runs | ~99 tok/s |
| DeepSeek R1 14B | 14B | Q4_K_M | 9.9 GB | Runs | ~95 tok/s |
| Phi-4 14B | 14B | Q4_K_M | 9.9 GB | Runs | ~95 tok/s |
| Phi-4 Reasoning 14B | 14B | Q4_K_M | 11 GB | Runs | ~85 tok/s |
| Qwen 2.5 14B | 14B | Q4_K_M | 9.9 GB | Runs | ~95 tok/s |
| Qwen 2.5 Coder 14B | 14B | Q4_K_M | 12 GB | Runs | ~78 tok/s |
| Qwen 3 14B | 14B | Q4_K_M | 12 GB | Runs | ~78 tok/s |
| StarCoder2 15B | 15B | Q8_0 | 17 GB | Runs | ~55 tok/s |
| Codestral 22B | 22B | Q4_K_M | 14.7 GB | Runs | ~64 tok/s |
| Devstral 24B | 24B | Q4_K_M | 17 GB | Runs | ~55 tok/s |
| Magistral Small 24B | 24B | Q4_K_M | 17 GB | Runs | ~55 tok/s |
| Mistral Small 3.1 24B | 24B | Q4_K_M | 18 GB | Runs | ~52 tok/s |
| Gemma 2 27B | 27B | Q4_K_M | 17.7 GB | Runs | ~53 tok/s |
| Gemma 3 27B | 27B | Q4_K_M | 20 GB | Runs | ~47 tok/s |
| Qwen 3 30B-A3B (MoE) | 30B | Q4_K_M | 22 GB | Runs (tight) | ~43 tok/s |
| Cogito 32B | 32B | Q4_K_M | 21.5 GB | Runs (tight) | ~44 tok/s |
| DeepSeek R1 32B | 32B | Q4_K_M | 20.7 GB | Runs (tight) | ~45 tok/s |
| Qwen 2.5 32B | 32B | Q4_K_M | 20.7 GB | Runs (tight) | ~45 tok/s |
| QwQ 32B | 32B | Q4_K_M | 21.5 GB | Runs (tight) | ~44 tok/s |
| Command R 35B | 35B | Q4_K_M | 22.5 GB | Runs (tight) | ~42 tok/s |
| Qwen 2.5 Coder 32B | 32B | Q4_K_M | 23 GB | CPU Offload | ~12 tok/s |
| Qwen 3 32B | 32B | Q4_K_M | 23 GB | CPU Offload | ~12 tok/s |
| Mixtral 8x7B | 47B | Q4_K_M | 29.7 GB | CPU Offload | ~10 tok/s |
19 models are too large for this hardware.
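For the models marked "CPU Offload" above, part of the network can stay in system RAM while the remaining layers run on the GPU. A minimal sketch using llama-cpp-python, assuming a Q4_K_M GGUF of Qwen 2.5 Coder 32B; the file name and layer count are placeholders to tune for your setup, not verified values:

```python
from llama_cpp import Llama

# Partial GPU offload: keep most layers in the 3090's 24 GB of VRAM and
# spill the rest to CPU RAM. n_gpu_layers is a tuning knob; 56 of roughly
# 64 layers for a 32B model at Q4_K_M is a guess, not a measured setting.
llm = Llama(
    model_path="qwen2.5-coder-32b-instruct-q4_k_m.gguf",  # placeholder path
    n_gpu_layers=56,   # layers kept in VRAM; -1 offloads everything to the GPU
    n_ctx=8192,        # context window; larger contexts grow the KV cache
)

out = llm("Write a Python function that reverses a linked list.", max_tokens=256)
print(out["choices"][0]["text"])
```

Expect throughput in the low tens of tokens per second once layers spill to CPU, as the table's offload estimates suggest; raising `n_gpu_layers` until you run out of VRAM is the usual way to find the sweet spot.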