NVIDIA GeForce RTX 2080 Ti

NVIDIA · 11GB GDDR6 · Can run 25 models

Manufacturer: NVIDIA
VRAM: 11 GB
Memory Type: GDDR6
Architecture: Turing
CUDA Cores: 4,352
Bandwidth: 616 GB/s
TDP: 250W
MSRP: $1,199
Released: Sep 20, 2018

AI Notes

The RTX 2080 Ti remains a popular budget choice for local AI on the used market. Its 11 GB of VRAM handles 7B models comfortably and can run 13B-class models with Q4 quantization. At 616 GB/s, its memory bandwidth delivers fast inference for its VRAM class, making it an excellent second-hand option for local AI experimentation.
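The VRAM figures in the table below can be approximated with a simple rule of thumb: model weights take roughly (parameter count × bits-per-weight ÷ 8) bytes, plus a fixed allowance for the KV cache and runtime buffers. A minimal sketch, where the bits-per-weight values and the 1.5 GB overhead are assumptions rather than exact GGUF sizes:

```python
# Rough VRAM estimate for a quantized model.
# Bits-per-weight figures are approximations for common GGUF
# quant types; real file sizes vary slightly per architecture.
BITS_PER_WEIGHT = {
    "Q4_K_M": 4.85,  # ~4-bit k-quant (assumed average)
    "Q8_0": 8.5,     # ~8-bit quant (assumed average)
}
OVERHEAD_GB = 1.5    # assumed allowance for KV cache + runtime buffers


def estimated_vram_gb(params_billions: float, quant: str) -> float:
    """Estimate total VRAM in GB for a model at a given quantization."""
    weights_gb = params_billions * BITS_PER_WEIGHT[quant] / 8
    return round(weights_gb + OVERHEAD_GB, 1)
```

For a 7B model at Q8_0 this yields roughly 9 GB, in line with the table below; treat the output as a planning estimate, not a guarantee.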

Compatible Models

| Model | Parameters | Best Quant | VRAM Used | Fit | Est. Speed |
|---|---|---|---|---|---|
| Gemma 3 1B | 1B | Q8_0 | 2 GB | Runs | ~308 tok/s |
| Llama 3.2 1B | 1B | Q8_0 | 3 GB | Runs | ~205 tok/s |
| DeepSeek R1 1.5B | 1.5B | Q8_0 | 3 GB | Runs | ~205 tok/s |
| Gemma 2 2B | 2B | Q8_0 | 4 GB | Runs | ~154 tok/s |
| Llama 3.2 3B | 3B | Q8_0 | 5 GB | Runs | ~123 tok/s |
| Phi-3 Mini 3.8B | 3.8B | Q8_0 | 5.8 GB | Runs | ~106 tok/s |
| Phi-4 Mini 3.8B | 3.8B | Q4_K_M | 4.5 GB | Runs | ~137 tok/s |
| Gemma 3 4B | 4B | Q4_K_M | 5 GB | Runs | ~123 tok/s |
| Qwen 3 4B | 4B | Q4_K_M | 4.5 GB | Runs | ~137 tok/s |
| DeepSeek R1 7B | 7B | Q8_0 | 9 GB | Runs | ~68 tok/s |
| Mistral 7B | 7B | Q8_0 | 9 GB | Runs | ~68 tok/s |
| Qwen 2.5 7B | 7B | Q8_0 | 9 GB | Runs | ~68 tok/s |
| Qwen 2.5 Coder 7B | 7B | Q8_0 | 9 GB | Runs | ~68 tok/s |
| DeepSeek R1 8B | 8B | Q4_K_M | 7.5 GB | Runs | ~82 tok/s |
| Qwen 3 8B | 8B | Q4_K_M | 7.5 GB | Runs | ~82 tok/s |
| Llama 3.1 8B | 8B | Q8_0 | 10 GB | Runs (tight) | ~62 tok/s |
| Mistral Nemo 12B | 12B | Q4_K_M | 9.5 GB | Runs (tight) | ~65 tok/s |
| DeepSeek R1 14B | 14B | Q4_K_M | 9.9 GB | Runs (tight) | ~62 tok/s |
| Phi-4 14B | 14B | Q4_K_M | 9.9 GB | Runs (tight) | ~62 tok/s |
| Qwen 2.5 14B | 14B | Q4_K_M | 9.9 GB | Runs (tight) | ~62 tok/s |
| Gemma 2 9B | 9B | Q8_0 | 11 GB | CPU Offload | ~56 tok/s |
| Gemma 3 12B | 12B | Q4_K_M | 10.5 GB | CPU Offload | ~59 tok/s |
| Qwen 2.5 Coder 14B | 14B | Q4_K_M | 12 GB | CPU Offload | ~51 tok/s |
| Qwen 3 14B | 14B | Q4_K_M | 12 GB | CPU Offload | ~51 tok/s |
| Codestral 22B | 22B | Q4_K_M | 14.7 GB | CPU Offload | ~42 tok/s |
18 models are too large for this hardware.
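The Fit column above follows a consistent pattern: models well under the card's 11 GB run comfortably, models near the limit run but leave little headroom for context, and anything at or above it must offload layers to the CPU. A minimal sketch of that classification; the percentage thresholds are assumptions inferred from the table, not official cutoffs:

```python
def fit_category(vram_used_gb: float, card_vram_gb: float = 11.0) -> str:
    """Classify a model's fit against a card's VRAM budget.

    Thresholds are assumptions inferred from the table above:
    up to ~82% of VRAM runs comfortably, up to ~91% runs tight,
    and anything beyond spills layers to the CPU.
    """
    if vram_used_gb <= 0.82 * card_vram_gb:
        return "Runs"
    if vram_used_gb <= 0.91 * card_vram_gb:
        return "Runs (tight)"
    return "CPU Offload"
```

Under these assumed thresholds, a 9 GB model classifies as "Runs", a 10 GB model as "Runs (tight)", and a 10.5 GB model as "CPU Offload", matching the table's entries.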