NVIDIA GeForce RTX 5070 Ti

NVIDIA · 16GB GDDR7 · Can run 36 models

Manufacturer: NVIDIA
VRAM: 16 GB
Memory Type: GDDR7
Architecture: Blackwell
CUDA Cores: 8,960
Bandwidth: 896 GB/s
TDP: 300W
MSRP: $749
Released: Mar 10, 2025

AI Notes

The RTX 5070 Ti pairs 896 GB/s of memory bandwidth with 16 GB of GDDR7 VRAM. It runs 13–14B models at high speed entirely in VRAM and can handle ~30B models at Q4 quantization with partial CPU offload. The combination of high bandwidth and 16 GB of capacity makes it one of the strongest cards for local AI at its price point.
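The speed estimates in the table below are consistent with a simple memory-bandwidth heuristic: during token generation the full set of weights is streamed from VRAM once per token, so decode speed is roughly bandwidth divided by the model's footprint. A minimal sketch of that estimate (the function name is illustrative, not from any real tool):

```python
def estimate_decode_speed(bandwidth_gb_s: float, model_size_gb: float) -> int:
    """Rough decode speed: each generated token reads the whole model's
    weights from VRAM once, so generation is memory-bandwidth-bound."""
    return round(bandwidth_gb_s / model_size_gb)

# RTX 5070 Ti: 896 GB/s of GDDR7 bandwidth
print(estimate_decode_speed(896, 9))  # 7B @ Q8_0 (~9 GB) -> ~100 tok/s
print(estimate_decode_speed(896, 2))  # 1B @ Q8_0 (~2 GB) -> ~448 tok/s
```

Real-world speeds also depend on compute, KV-cache reads, and context length, so treat these as ceilings rather than guarantees.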

Compatible Models

| Model | Parameters | Best Quant | VRAM Used | Fit | Est. Speed |
|---|---|---|---|---|---|
| Gemma 3 1B | 1B | Q8_0 | 2 GB | Runs | ~448 tok/s |
| Llama 3.2 1B | 1B | Q8_0 | 3 GB | Runs | ~299 tok/s |
| DeepSeek R1 1.5B | 1.5B | Q8_0 | 3 GB | Runs | ~299 tok/s |
| Gemma 2 2B | 2B | Q8_0 | 4 GB | Runs | ~224 tok/s |
| Llama 3.2 3B | 3B | Q8_0 | 5 GB | Runs | ~179 tok/s |
| Phi-3 Mini 3.8B | 3.8B | Q8_0 | 5.8 GB | Runs | ~154 tok/s |
| Phi-4 Mini 3.8B | 3.8B | Q4_K_M | 4.5 GB | Runs | ~199 tok/s |
| Gemma 3 4B | 4B | Q4_K_M | 5 GB | Runs | ~179 tok/s |
| Qwen 3 4B | 4B | Q4_K_M | 4.5 GB | Runs | ~199 tok/s |
| DeepSeek R1 7B | 7B | Q8_0 | 9 GB | Runs | ~100 tok/s |
| Mistral 7B | 7B | Q8_0 | 9 GB | Runs | ~100 tok/s |
| Qwen 2.5 7B | 7B | Q8_0 | 9 GB | Runs | ~100 tok/s |
| Qwen 2.5 Coder 7B | 7B | Q8_0 | 9 GB | Runs | ~100 tok/s |
| DeepSeek R1 8B | 8B | Q4_K_M | 7.5 GB | Runs | ~119 tok/s |
| Llama 3.1 8B | 8B | Q8_0 | 10 GB | Runs | ~90 tok/s |
| Qwen 3 8B | 8B | Q4_K_M | 7.5 GB | Runs | ~119 tok/s |
| Gemma 2 9B | 9B | Q8_0 | 11 GB | Runs | ~81 tok/s |
| Gemma 3 12B | 12B | Q4_K_M | 10.5 GB | Runs | ~85 tok/s |
| Mistral Nemo 12B | 12B | Q4_K_M | 9.5 GB | Runs | ~94 tok/s |
| DeepSeek R1 14B | 14B | Q4_K_M | 9.9 GB | Runs | ~91 tok/s |
| Phi-4 14B | 14B | Q4_K_M | 9.9 GB | Runs | ~91 tok/s |
| Qwen 2.5 14B | 14B | Q4_K_M | 9.9 GB | Runs | ~91 tok/s |
| Qwen 2.5 Coder 14B | 14B | Q4_K_M | 12 GB | Runs | ~75 tok/s |
| Qwen 3 14B | 14B | Q4_K_M | 12 GB | Runs | ~75 tok/s |
| Codestral 22B | 22B | Q4_K_M | 14.7 GB | Runs (tight) | ~61 tok/s |
| StarCoder2 15B | 15B | Q8_0 | 17 GB | CPU Offload | ~53 tok/s |
| Devstral 24B | 24B | Q4_K_M | 17 GB | CPU Offload | ~53 tok/s |
| Mistral Small 3.1 24B | 24B | Q4_K_M | 18 GB | CPU Offload | ~50 tok/s |
| Gemma 2 27B | 27B | Q4_K_M | 17.7 GB | CPU Offload | ~51 tok/s |
| Gemma 3 27B | 27B | Q4_K_M | 20 GB | CPU Offload | ~45 tok/s |
| Qwen 3 30B-A3B (MoE) | 30B | Q4_K_M | 22 GB | CPU Offload | ~41 tok/s |
| DeepSeek R1 32B | 32B | Q4_K_M | 20.7 GB | CPU Offload | ~43 tok/s |
| Qwen 2.5 32B | 32B | Q4_K_M | 20.7 GB | CPU Offload | ~43 tok/s |
| Qwen 2.5 Coder 32B | 32B | Q4_K_M | 23 GB | CPU Offload | ~39 tok/s |
| Qwen 3 32B | 32B | Q4_K_M | 23 GB | CPU Offload | ~39 tok/s |
| Command R 35B | 35B | Q4_K_M | 22.5 GB | CPU Offload | ~40 tok/s |
7 models are too large for this hardware.
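The Fit column above can be reproduced with a simple threshold rule against the card's 16 GB of VRAM: a model "Runs" when its quantized footprint leaves comfortable headroom, is "tight" when it nearly fills VRAM, and needs CPU offload once it exceeds it. A hedged sketch, with the ~90% cutoff inferred from the table rather than from any published spec:

```python
def classify_fit(vram_used_gb: float, vram_total_gb: float = 16.0) -> str:
    """Classify how a quantized model fits on this card.
    The ~90% threshold is an assumption inferred from the table:
    Codestral 22B at 14.7 GB is 'tight', Qwen 3 14B at 12 GB is not."""
    if vram_used_gb > vram_total_gb:
        return "CPU Offload"   # weights spill into system RAM
    if vram_used_gb > 0.9 * vram_total_gb:
        return "Runs (tight)"  # little headroom for KV cache / context
    return "Runs"

print(classify_fit(12.0))  # Qwen 3 14B @ Q4_K_M  -> Runs
print(classify_fit(14.7))  # Codestral 22B        -> Runs (tight)
print(classify_fit(17.0))  # StarCoder2 15B @ Q8_0 -> CPU Offload
```

Note that a tight fit trades context length for residency: the KV cache grows with context, so a model at 14.7 GB may still spill at long contexts.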