NVIDIA GeForce RTX 4060 Ti 8GB
NVIDIA · 8GB GDDR6 · Can run 35 models
| Specification | Value |
|---|---|
| Manufacturer | NVIDIA |
| VRAM | 8 GB |
| Memory Type | GDDR6 |
| Architecture | Ada Lovelace |
| CUDA Cores | 4,352 |
| Tensor Cores | 136 |
| Bandwidth | 288 GB/s |
| TDP | 160 W |
| MSRP | $399 |
| Released | May 24, 2023 |
AI Notes
The RTX 4060 Ti 8GB is constrained for AI workloads by its 8 GB of VRAM. It runs 7B-8B models comfortably at 4-bit quantization, but higher-precision quants and larger models spill into system RAM via CPU offload, dropping throughput to roughly 10 tok/s or less. For AI-focused use, the 16GB variant is strongly recommended over this card.
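As a rough sanity check for the fit figures below, a model's resident size is approximately its parameter count times the quant's effective bytes per weight, plus context and runtime overhead. A minimal sketch of that estimate; the bytes-per-weight and overhead constants are illustrative assumptions, not measurements:

```python
# Rough GGUF VRAM estimate: weights + a flat allowance for KV cache and
# runtime buffers. Bytes-per-weight and overhead_gb are assumed values
# for illustration only.
BYTES_PER_WEIGHT = {"Q4_K_M": 0.60, "Q8_0": 1.06}  # approx. effective bytes/weight

def estimate_vram_gb(params_b: float, quant: str, overhead_gb: float = 1.5) -> float:
    """Approximate memory to hold a params_b-billion-parameter model."""
    return params_b * BYTES_PER_WEIGHT[quant] + overhead_gb

# 8B at Q4_K_M: 8 * 0.60 + 1.5 = 6.3 GB of weights and buffers, in the
# ballpark of the table's 7.5 GB once a longer context window is added.
print(f"{estimate_vram_gb(8, 'Q4_K_M'):.1f} GB")
```

Anything that lands above the card's 8 GB forces layers into system RAM, which is why the Q8_0 7B entries below fall to around 10 tok/s.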
Compatible Models
| Model | Parameters | Best Quant | VRAM Used | Fit | Est. Speed |
|---|---|---|---|---|---|
| Qwen 3 0.6B | 600M | Q4_K_M | 2.5 GB | Runs | ~115 tok/s |
| Gemma 3 1B | 1B | Q8_0 | 2 GB | Runs | ~144 tok/s |
| Llama 3.2 1B | 1B | Q8_0 | 3 GB | Runs | ~96 tok/s |
| DeepSeek R1 1.5B | 1.5B | Q8_0 | 3 GB | Runs | ~96 tok/s |
| Gemma 2 2B | 2B | Q8_0 | 4 GB | Runs | ~72 tok/s |
| Gemma 3n E2B | 2B | Q4_K_M | 3.3 GB | Runs | ~87 tok/s |
| Llama 3.2 3B | 3B | Q8_0 | 5 GB | Runs | ~58 tok/s |
| Phi-3 Mini 3.8B | 3.8B | Q8_0 | 5.8 GB | Runs | ~50 tok/s |
| Phi-4 Mini 3.8B | 3.8B | Q4_K_M | 4.5 GB | Runs | ~64 tok/s |
| Gemma 3 4B | 4B | Q4_K_M | 5 GB | Runs | ~58 tok/s |
| Gemma 3n E4B | 4B | Q4_K_M | 4.5 GB | Runs | ~64 tok/s |
| Qwen 3 4B | 4B | Q4_K_M | 4.5 GB | Runs | ~64 tok/s |
| Falcon 3 7B | 7B | Q4_K_M | 6.8 GB | Runs | ~42 tok/s |
| Qwen 2.5 VL 7B | 7B | Q4_K_M | 7 GB | Runs (tight) | ~41 tok/s |
| Cogito 8B | 8B | Q4_K_M | 7.5 GB | Runs (tight) | ~38 tok/s |
| DeepSeek R1 8B | 8B | Q4_K_M | 7.5 GB | Runs (tight) | ~38 tok/s |
| Nemotron 3 Nano 8B | 8B | Q4_K_M | 7.5 GB | Runs (tight) | ~38 tok/s |
| Qwen 3 8B | 8B | Q4_K_M | 7.5 GB | Runs (tight) | ~38 tok/s |
| DeepSeek R1 7B | 7B | Q8_0 | 9 GB | CPU Offload | ~10 tok/s |
| Mistral 7B | 7B | Q8_0 | 9 GB | CPU Offload | ~10 tok/s |
| Qwen 2.5 7B | 7B | Q8_0 | 9 GB | CPU Offload | ~10 tok/s |
| Qwen 2.5 Coder 7B | 7B | Q8_0 | 9 GB | CPU Offload | ~10 tok/s |
| Llama 3.1 8B | 8B | Q8_0 | 10 GB | CPU Offload | ~9 tok/s |
| Gemma 2 9B | 9B | Q8_0 | 11 GB | CPU Offload | ~8 tok/s |
| Falcon 3 10B | 10B | Q4_K_M | 8.5 GB | CPU Offload | ~10 tok/s |
| Llama 3.2 Vision 11B | 11B | Q4_K_M | 8.5 GB | CPU Offload | ~10 tok/s |
| Gemma 3 12B | 12B | Q4_K_M | 10.5 GB | CPU Offload | ~8 tok/s |
| Mistral Nemo 12B | 12B | Q4_K_M | 9.5 GB | CPU Offload | ~9 tok/s |
| DeepSeek R1 14B | 14B | Q4_K_M | 9.9 GB | CPU Offload | ~9 tok/s |
| Phi-4 14B | 14B | Q4_K_M | 9.9 GB | CPU Offload | ~9 tok/s |
| Phi-4 Reasoning 14B | 14B | Q4_K_M | 11 GB | CPU Offload | ~8 tok/s |
| Qwen 2.5 14B | 14B | Q4_K_M | 9.9 GB | CPU Offload | ~9 tok/s |
| Qwen 2.5 Coder 14B | 14B | Q4_K_M | 12 GB | CPU Offload | ~7 tok/s |
| Qwen 3 14B | 14B | Q4_K_M | 12 GB | CPU Offload | ~7 tok/s |
| StarCoder2 15B | 15B | Q4_K_M | 10.5 GB | CPU Offload | ~8 tok/s |
34 models are too large for this hardware.
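The Est. Speed column for fully GPU-resident models tracks a simple memory-bandwidth bound: generating one token streams every weight through the memory bus once, so tokens/s ≈ bandwidth ÷ resident model size (288 GB/s ÷ 7.5 GB ≈ 38 tok/s, as in the 8B rows above). A minimal sketch of that estimate; the fit cutoffs are assumptions chosen to mirror the table's labels:

```python
# Bandwidth-bound decode estimate for fully GPU-resident models: each
# generated token reads all weights once, so tokens/s is capped at
# memory bandwidth / model size. Offloaded models fall well below this
# bound because spilled layers stream from system RAM over PCIe.
BANDWIDTH_GB_S = 288.0  # RTX 4060 Ti memory bandwidth
VRAM_GB = 8.0

def est_tok_s(size_gb: float) -> float:
    return BANDWIDTH_GB_S / size_gb

def fit_label(size_gb: float) -> str:
    # Cutoffs are assumptions that reproduce the table's labels.
    if size_gb < 7.0:
        return "Runs"
    if size_gb <= VRAM_GB:
        return "Runs (tight)"
    return "CPU Offload"

for name, size in [("Gemma 3 4B Q4_K_M", 5.0), ("Qwen 3 8B Q4_K_M", 7.5)]:
    print(f"{name}: {fit_label(size)}, ~{est_tok_s(size):.0f} tok/s")
# Gemma 3 4B Q4_K_M: Runs, ~58 tok/s
# Qwen 3 8B Q4_K_M: Runs (tight), ~38 tok/s
```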