AMD Radeon RX 9060 XT 8GB

AMD · 8GB GDDR6 · Can run 35 models

Manufacturer: AMD
VRAM: 8 GB
Memory Type: GDDR6
Architecture: RDNA 4
Stream Processors: 2,048
Bandwidth: 269 GB/s
TDP: 150 W
MSRP: $279
Released: Jun 5, 2025

AI Notes

The RX 9060 XT 8GB is a budget-friendly RDNA 4 card with enough VRAM to run 7B models comfortably at Q4 quantization. The halved memory capacity compared to the 16GB variant limits which models fit entirely on the GPU; anything larger must spill layers into system RAM, which sharply reduces throughput. ROCm support continues to improve for RDNA 4, but NVIDIA remains easier to set up for AI inference.
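As a rough illustration of how the fit calls in the table below come about, here is a back-of-envelope sketch. The bits-per-weight figures and the fixed overhead for KV cache and buffers are assumptions for illustration, not measured values:

```python
# Back-of-envelope VRAM fit check for an 8 GB card.
# Assumptions (illustrative, not measured): Q4_K_M ~4.85 bits/weight,
# Q8_0 ~8.5 bits/weight, plus ~1.5 GB for KV cache, runtime buffers, and context.

BITS_PER_WEIGHT = {"Q4_K_M": 4.85, "Q8_0": 8.5}
OVERHEAD_GB = 1.5
VRAM_GB = 8.0

def estimate_gb(params_billion: float, quant: str) -> float:
    """Approximate total memory needed to run the model."""
    weights_gb = params_billion * BITS_PER_WEIGHT[quant] / 8
    return weights_gb + OVERHEAD_GB

def fits(params_billion: float, quant: str) -> bool:
    """True if the estimate fits in the card's VRAM."""
    return estimate_gb(params_billion, quant) <= VRAM_GB

print(f"7B Q4_K_M: {estimate_gb(7, 'Q4_K_M'):.1f} GB, fits: {fits(7, 'Q4_K_M')}")
print(f"7B Q8_0:  {estimate_gb(7, 'Q8_0'):.1f} GB, fits: {fits(7, 'Q8_0')}")
```

With these assumptions a 7B model fits at Q4_K_M (~5.7 GB) but not at Q8_0 (~8.9 GB), which matches why the Q8_0 7B rows below fall into CPU offload territory.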

Compatible Models

| Model | Parameters | Best Quant | VRAM Used | Fit | Est. Speed |
|---|---|---|---|---|---|
| Qwen 3 0.6B | 600M | Q4_K_M | 2.5 GB | Runs | ~108 tok/s |
| Gemma 3 1B | 1B | Q8_0 | 2 GB | Runs | ~135 tok/s |
| Llama 3.2 1B | 1B | Q8_0 | 3 GB | Runs | ~90 tok/s |
| DeepSeek R1 1.5B | 1.5B | Q8_0 | 3 GB | Runs | ~90 tok/s |
| Gemma 2 2B | 2B | Q8_0 | 4 GB | Runs | ~67 tok/s |
| Gemma 3n E2B | 2B | Q4_K_M | 3.3 GB | Runs | ~82 tok/s |
| Llama 3.2 3B | 3B | Q8_0 | 5 GB | Runs | ~54 tok/s |
| Phi-3 Mini 3.8B | 3.8B | Q8_0 | 5.8 GB | Runs | ~46 tok/s |
| Phi-4 Mini 3.8B | 3.8B | Q4_K_M | 4.5 GB | Runs | ~60 tok/s |
| Gemma 3 4B | 4B | Q4_K_M | 5 GB | Runs | ~54 tok/s |
| Gemma 3n E4B | 4B | Q4_K_M | 4.5 GB | Runs | ~60 tok/s |
| Qwen 3 4B | 4B | Q4_K_M | 4.5 GB | Runs | ~60 tok/s |
| Falcon 3 7B | 7B | Q4_K_M | 6.8 GB | Runs | ~40 tok/s |
| Qwen 2.5 VL 7B | 7B | Q4_K_M | 7 GB | Runs (tight) | ~38 tok/s |
| Cogito 8B | 8B | Q4_K_M | 7.5 GB | Runs (tight) | ~36 tok/s |
| DeepSeek R1 8B | 8B | Q4_K_M | 7.5 GB | Runs (tight) | ~36 tok/s |
| Nemotron 3 Nano 8B | 8B | Q4_K_M | 7.5 GB | Runs (tight) | ~36 tok/s |
| Qwen 3 8B | 8B | Q4_K_M | 7.5 GB | Runs (tight) | ~36 tok/s |
| DeepSeek R1 7B | 7B | Q8_0 | 9 GB | CPU Offload | ~9 tok/s |
| Mistral 7B | 7B | Q8_0 | 9 GB | CPU Offload | ~9 tok/s |
| Qwen 2.5 7B | 7B | Q8_0 | 9 GB | CPU Offload | ~9 tok/s |
| Qwen 2.5 Coder 7B | 7B | Q8_0 | 9 GB | CPU Offload | ~9 tok/s |
| Llama 3.1 8B | 8B | Q8_0 | 10 GB | CPU Offload | ~8 tok/s |
| Gemma 2 9B | 9B | Q8_0 | 11 GB | CPU Offload | ~7 tok/s |
| Falcon 3 10B | 10B | Q4_K_M | 8.5 GB | CPU Offload | ~10 tok/s |
| Llama 3.2 Vision 11B | 11B | Q4_K_M | 8.5 GB | CPU Offload | ~10 tok/s |
| Gemma 3 12B | 12B | Q4_K_M | 10.5 GB | CPU Offload | ~8 tok/s |
| Mistral Nemo 12B | 12B | Q4_K_M | 9.5 GB | CPU Offload | ~8 tok/s |
| DeepSeek R1 14B | 14B | Q4_K_M | 9.9 GB | CPU Offload | ~8 tok/s |
| Phi-4 14B | 14B | Q4_K_M | 9.9 GB | CPU Offload | ~8 tok/s |
| Phi-4 Reasoning 14B | 14B | Q4_K_M | 11 GB | CPU Offload | ~7 tok/s |
| Qwen 2.5 14B | 14B | Q4_K_M | 9.9 GB | CPU Offload | ~8 tok/s |
| Qwen 2.5 Coder 14B | 14B | Q4_K_M | 12 GB | CPU Offload | ~7 tok/s |
| Qwen 3 14B | 14B | Q4_K_M | 12 GB | CPU Offload | ~7 tok/s |
| StarCoder2 15B | 15B | Q4_K_M | 10.5 GB | CPU Offload | ~8 tok/s |
34 models are too large for this hardware.
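For the "CPU Offload" rows, a runtime such as llama.cpp keeps as many transformer layers on the GPU as fit and runs the rest from system RAM (the `--n-gpu-layers` / `-ngl` flag). A minimal sketch of that split, assuming equal-sized layers and an illustrative layer count and VRAM budget:

```python
import math

def split_layers(model_gb: float, n_layers: int, vram_budget_gb: float) -> int:
    """Number of layers that fit on the GPU, assuming equal-sized layers
    and that the whole budget goes to weights (overhead ignored)."""
    per_layer_gb = model_gb / n_layers
    return min(n_layers, math.floor(vram_budget_gb / per_layer_gb))

# Example: a 12B model at Q4_K_M (~10.5 GB per the table), with an assumed
# 48 layers and ~7 GB of the 8 GB card left for weights after KV cache/buffers.
gpu_layers = split_layers(10.5, 48, 7.0)
print(f"-ngl {gpu_layers}")  # remaining layers run on the CPU
```

The layer count and the 7 GB budget here are hypothetical; in practice llama.cpp reports the model's layer count at load time, and you lower `-ngl` until the model loads without out-of-memory errors.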