# AMD Radeon RX 7900 XTX

AMD · 24GB GDDR6 · Can run 20 models
| Spec | Value |
|---|---|
| Manufacturer | AMD |
| VRAM | 24 GB |
| Memory Type | GDDR6 |
| Architecture | RDNA 3 |
| Stream Processors | 6,144 |
| TDP | 355W |
| MSRP | $999 |
| Released | Dec 13, 2022 |
## AI Notes
The RX 7900 XTX is AMD's flagship with 24GB of GDDR6 VRAM, matching the RTX 4090's capacity for model loading. It can run 13B-class models at 8-bit quantization (a 13B model at full FP16 precision needs roughly 26 GB and would not fit) and 30B+ models at 4-bit quantization. ROCm support in Ollama is maturing, though NVIDIA's CUDA still offers broader compatibility.
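The fit judgments in the table below follow from a simple rule of thumb: weight memory is roughly parameters × bits-per-weight / 8, plus runtime overhead for the KV cache and buffers. A minimal sketch, assuming an approximate 20% overhead factor and typical effective bit widths for GGUF quant formats (both are rough assumptions; actual usage varies with context length and runtime):

```python
# Approximate effective bits per weight for common GGUF quantizations
# (assumed values; K-quants mix block sizes, so these are averages).
QUANT_BITS = {"FP16": 16.0, "Q8_0": 8.5, "Q6_K": 6.6, "Q4_K_M": 4.85}

def estimate_vram_gb(params_billion: float, quant: str, overhead: float = 1.2) -> float:
    """Rough VRAM estimate: weights in GB times an overhead factor
    for KV cache and runtime buffers (assumed ~20%)."""
    bits = QUANT_BITS[quant]
    return params_billion * bits / 8 * overhead

def fits(params_billion: float, quant: str, vram_gb: float = 24.0) -> bool:
    """Does the model fit in this card's 24 GB of VRAM?"""
    return estimate_vram_gb(params_billion, quant) <= vram_gb
```

By this estimate a 32B model at Q4_K_M lands just under 24 GB (hence "tight"), while Mixtral 8x7B's 47B parameters overflow even at 4-bit, forcing CPU offload.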
## Compatible Models
| Model | Parameters | Best Quant | VRAM Used | Fit |
|---|---|---|---|---|
| Llama 3.2 1B | 1B | Q8_0 | 3 GB | Runs |
| Gemma 2 2B | 2B | Q8_0 | 4 GB | Runs |
| Llama 3.2 3B | 3B | Q8_0 | 5 GB | Runs |
| Phi-3 Mini 3.8B | 3.8B | Q8_0 | 5.8 GB | Runs |
| DeepSeek R1 7B | 7B | Q8_0 | 9 GB | Runs |
| Mistral 7B | 7B | Q8_0 | 9 GB | Runs |
| Qwen 2.5 7B | 7B | Q8_0 | 9 GB | Runs |
| Qwen 2.5 Coder 7B | 7B | Q8_0 | 9 GB | Runs |
| Llama 3.1 8B | 8B | Q8_0 | 10 GB | Runs |
| Gemma 2 9B | 9B | Q8_0 | 11 GB | Runs |
| DeepSeek R1 14B | 14B | Q4_K_M | 9.9 GB | Runs |
| Phi-4 14B | 14B | Q4_K_M | 9.9 GB | Runs |
| Qwen 2.5 14B | 14B | Q4_K_M | 9.9 GB | Runs |
| StarCoder2 15B | 15B | Q8_0 | 17 GB | Runs |
| Codestral 22B | 22B | Q4_K_M | 14.7 GB | Runs |
| Gemma 2 27B | 27B | Q4_K_M | 17.7 GB | Runs |
| DeepSeek R1 32B | 32B | Q4_K_M | 20.7 GB | Runs (tight) |
| Qwen 2.5 32B | 32B | Q4_K_M | 20.7 GB | Runs (tight) |
| Command R 35B | 35B | Q4_K_M | 22.5 GB | Runs (tight) |
| Mixtral 8x7B | 47B | Q4_K_M | 29.7 GB | CPU Offload |
5 models are too large for this hardware.