NVIDIA RTX A6000
NVIDIA · 48GB GDDR6 · Can run 62 models
| Spec | Value |
|---|---|
| Manufacturer | NVIDIA |
| VRAM | 48 GB |
| Memory Type | GDDR6 |
| Architecture | Ampere |
| CUDA Cores | 10,752 |
| Tensor Cores | 336 |
| Bandwidth | 768 GB/s |
| TDP | 300W |
| MSRP | $4,650 |
| Released | Dec 3, 2020 |
AI Notes
The RTX A6000 is a workstation GPU with a massive 48 GB of GDDR6 VRAM, tied for the most of any single Ampere workstation card. It can load 30B models at full precision and run 70B models with quantization. While expensive new, used prices have dropped significantly, making it one of the best options for running large models on a single GPU.
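To see why 48 GB covers quantized 70B models, a rough VRAM estimate is parameter count × bits per weight, plus some overhead for the KV cache and runtime buffers. A minimal sketch, assuming approximate effective bits-per-weight for common llama.cpp quant levels and a ~15% overhead factor (both assumptions, not figures from this page):

```python
# Rough VRAM estimate for a quantized model (sketch, not exact).
# BITS_PER_WEIGHT values are approximate effective rates for llama.cpp
# quant formats; real files vary by architecture and tensor mix.
BITS_PER_WEIGHT = {
    "Q4_K_M": 4.85,  # assumed ~4.85 bits/weight
    "Q8_0": 8.5,     # assumed ~8.5 bits/weight
    "F16": 16.0,
}

def estimate_vram_gb(params_billions: float, quant: str, overhead: float = 0.15) -> float:
    """Weights at the quant's bits-per-weight, plus overhead for KV cache/buffers."""
    weight_gb = params_billions * BITS_PER_WEIGHT[quant] / 8
    return weight_gb * (1 + overhead)

# A 7B model at Q8_0 lands near the ~9 GB listed below;
# a 70B model at Q4_K_M lands in the low-to-mid 40s GB — a tight fit in 48 GB.
print(round(estimate_vram_gb(7, "Q8_0"), 1))
print(round(estimate_vram_gb(70, "Q4_K_M"), 1))
```

The same arithmetic shows why a 30B model fits at full (16-bit) precision: roughly 30 × 2 bytes ≈ 60 GB is too large, but a 30B model at 8-bit (~32 GB) or the table's Q4/Q8 entries fit comfortably.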
Compatible Models
| Model | Parameters | Best Quant | VRAM Used | Fit | Est. Speed |
|---|---|---|---|---|---|
| Qwen 3 0.6B | 600M | Q4_K_M | 2.5 GB | Runs | ~307 tok/s |
| Gemma 3 1B | 1B | Q8_0 | 2 GB | Runs | ~384 tok/s |
| Llama 3.2 1B | 1B | Q8_0 | 3 GB | Runs | ~256 tok/s |
| DeepSeek R1 1.5B | 1.5B | Q8_0 | 3 GB | Runs | ~256 tok/s |
| Gemma 2 2B | 2B | Q8_0 | 4 GB | Runs | ~192 tok/s |
| Gemma 3n E2B | 2B | Q4_K_M | 3.3 GB | Runs | ~233 tok/s |
| Llama 3.2 3B | 3B | Q8_0 | 5 GB | Runs | ~154 tok/s |
| Phi-3 Mini 3.8B | 3.8B | Q8_0 | 5.8 GB | Runs | ~132 tok/s |
| Phi-4 Mini 3.8B | 3.8B | Q4_K_M | 4.5 GB | Runs | ~171 tok/s |
| Gemma 3 4B | 4B | Q4_K_M | 5 GB | Runs | ~154 tok/s |
| Gemma 3n E4B | 4B | Q4_K_M | 4.5 GB | Runs | ~171 tok/s |
| Qwen 3 4B | 4B | Q4_K_M | 4.5 GB | Runs | ~171 tok/s |
| DeepSeek R1 7B | 7B | Q8_0 | 9 GB | Runs | ~85 tok/s |
| Falcon 3 7B | 7B | Q4_K_M | 6.8 GB | Runs | ~113 tok/s |
| Mistral 7B | 7B | Q8_0 | 9 GB | Runs | ~85 tok/s |
| Qwen 2.5 7B | 7B | Q8_0 | 9 GB | Runs | ~85 tok/s |
| Qwen 2.5 Coder 7B | 7B | Q8_0 | 9 GB | Runs | ~85 tok/s |
| Qwen 2.5 VL 7B | 7B | Q4_K_M | 7 GB | Runs | ~110 tok/s |
| Cogito 8B | 8B | Q4_K_M | 7.5 GB | Runs | ~102 tok/s |
| DeepSeek R1 8B | 8B | Q4_K_M | 7.5 GB | Runs | ~102 tok/s |
| Llama 3.1 8B | 8B | Q8_0 | 10 GB | Runs | ~77 tok/s |
| Nemotron 3 Nano 8B | 8B | Q4_K_M | 7.5 GB | Runs | ~102 tok/s |
| Qwen 3 8B | 8B | Q4_K_M | 7.5 GB | Runs | ~102 tok/s |
| Gemma 2 9B | 9B | Q8_0 | 11 GB | Runs | ~70 tok/s |
| Falcon 3 10B | 10B | Q4_K_M | 8.5 GB | Runs | ~90 tok/s |
| Llama 3.2 Vision 11B | 11B | Q4_K_M | 8.5 GB | Runs | ~90 tok/s |
| Gemma 3 12B | 12B | Q4_K_M | 10.5 GB | Runs | ~73 tok/s |
| Mistral Nemo 12B | 12B | Q4_K_M | 9.5 GB | Runs | ~81 tok/s |
| DeepSeek R1 14B | 14B | Q4_K_M | 9.9 GB | Runs | ~78 tok/s |
| Phi-4 14B | 14B | Q4_K_M | 9.9 GB | Runs | ~78 tok/s |
| Phi-4 Reasoning 14B | 14B | Q4_K_M | 11 GB | Runs | ~70 tok/s |
| Qwen 2.5 14B | 14B | Q4_K_M | 9.9 GB | Runs | ~78 tok/s |
| Qwen 2.5 Coder 14B | 14B | Q4_K_M | 12 GB | Runs | ~64 tok/s |
| Qwen 3 14B | 14B | Q4_K_M | 12 GB | Runs | ~64 tok/s |
| StarCoder2 15B | 15B | Q8_0 | 17 GB | Runs | ~45 tok/s |
| Codestral 22B | 22B | Q4_K_M | 14.7 GB | Runs | ~52 tok/s |
| Devstral 24B | 24B | Q4_K_M | 17 GB | Runs | ~45 tok/s |
| Magistral Small 24B | 24B | Q4_K_M | 17 GB | Runs | ~45 tok/s |
| Mistral Small 3.1 24B | 24B | Q4_K_M | 18 GB | Runs | ~43 tok/s |
| Gemma 2 27B | 27B | Q4_K_M | 17.7 GB | Runs | ~43 tok/s |
| Gemma 3 27B | 27B | Q4_K_M | 20 GB | Runs | ~38 tok/s |
| Qwen 3 30B-A3B (MoE) | 30B | Q4_K_M | 22 GB | Runs | ~35 tok/s |
| Cogito 32B | 32B | Q4_K_M | 21.5 GB | Runs | ~36 tok/s |
| DeepSeek R1 32B | 32B | Q4_K_M | 20.7 GB | Runs | ~37 tok/s |
| Qwen 2.5 32B | 32B | Q4_K_M | 20.7 GB | Runs | ~37 tok/s |
| Qwen 2.5 Coder 32B | 32B | Q4_K_M | 23 GB | Runs | ~33 tok/s |
| Qwen 3 32B | 32B | Q4_K_M | 23 GB | Runs | ~33 tok/s |
| QwQ 32B | 32B | Q4_K_M | 21.5 GB | Runs | ~36 tok/s |
| Command R 35B | 35B | Q4_K_M | 22.5 GB | Runs | ~34 tok/s |
| Mixtral 8x7B | 47B | Q4_K_M | 29.7 GB | Runs | ~26 tok/s |
| Cogito 70B | 70B | Q4_K_M | 43 GB | Runs (tight) | ~18 tok/s |
| DeepSeek R1 70B | 70B | Q4_K_M | 43.5 GB | Runs (tight) | ~18 tok/s |
| Llama 3.1 70B | 70B | Q4_K_M | 43.5 GB | Runs (tight) | ~18 tok/s |
| Llama 3.3 70B | 70B | Q4_K_M | 43.5 GB | Runs (tight) | ~18 tok/s |
| Qwen 2.5 72B | 72B | Q4_K_M | 44.7 GB | Runs (tight) | ~17 tok/s |
| Qwen 2.5 VL 72B | 72B | Q4_K_M | 41 GB | Runs (tight) | ~19 tok/s |
| Llama 3.2 Vision 90B | 90B | Q4_K_M | 50 GB | CPU Offload | ~5 tok/s |
| Command R+ 104B | 104B | Q4_K_M | 57 GB | CPU Offload | ~4 tok/s |
| Llama 4 Scout (109B/17B active) | 109B | Q4_K_M | 72 GB | CPU Offload | ~3 tok/s |
| Command A 111B | 111B | Q4_K_M | 61 GB | CPU Offload | ~4 tok/s |
| Devstral 2 123B | 123B | Q4_K_M | 67 GB | CPU Offload | ~3 tok/s |
| Mistral Large 2 123B | 123B | Q4_K_M | 67 GB | CPU Offload | ~3 tok/s |
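The Est. Speed column follows from the GPU being memory-bandwidth bound during token generation: each new token requires reading every weight once, so throughput is roughly memory bandwidth divided by the model's size in VRAM. A quick sketch using the 768 GB/s figure from the spec table (the offloaded rows fall well below this estimate because system RAM is far slower):

```python
# Bandwidth-bound decode speed estimate: tok/s ≈ bandwidth / model size.
BANDWIDTH_GB_S = 768  # RTX A6000 memory bandwidth from the spec table

def est_tokens_per_sec(vram_used_gb: float) -> float:
    """Upper-bound decode throughput when generation is memory-bound."""
    return BANDWIDTH_GB_S / vram_used_gb

print(round(est_tokens_per_sec(9)))    # Mistral 7B Q8_0 at 9 GB -> 85 tok/s
print(round(est_tokens_per_sec(2.5)))  # Qwen 3 0.6B at 2.5 GB -> 307 tok/s
print(round(est_tokens_per_sec(43.5))) # Llama 3.3 70B at 43.5 GB -> 18 tok/s
```

This is an upper bound: it ignores compute limits, prompt processing, and MoE models, where only the active parameters are read per token.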
7 models are too large for this hardware.