Intel Arc B570
Intel · 10GB GDDR6 · Can run 36 models
| Spec | Value |
|---|---|
| Manufacturer | Intel |
| VRAM | 10 GB |
| Memory Type | GDDR6 |
| Architecture | Battlemage |
| Bandwidth | 380 GB/s |
| TDP | 150 W |
| MSRP | $219 |
| Released | Jan 16, 2025 |
AI Notes
The Intel Arc B570 provides 10 GB of VRAM at one of the lowest price points available, enough to run 7B models comfortably and to attempt 13B-class models with aggressive quantization. Intel GPU support in Ollama and other AI frameworks is still maturing, so setup may require extra effort. It is a solid budget entry point for users willing to navigate early-adopter friction.
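How far 10 GB stretches follows from a simple rule of thumb: a quantized model's weight footprint is roughly its parameter count times bits per weight, divided by 8, plus headroom for the KV cache and compute buffers. Below is a minimal sketch of that estimate in Python; the bits-per-weight figures (~4.85 for Q4_K_M, ~8.5 for Q8_0) and the flat 2 GB overhead are approximations, and real usage grows with context length.

```python
# Rough VRAM-fit estimator for quantized GGUF models.
# All constants are approximations: effective bits/weight varies by
# architecture, and the overhead term depends on context length.

BITS_PER_WEIGHT = {
    "Q4_K_M": 4.85,  # typical effective size for llama.cpp K-quants
    "Q8_0": 8.5,
}

def fits_in_vram(params_billions: float, quant: str,
                 vram_gb: float = 10.0, overhead_gb: float = 2.0) -> bool:
    """Estimate whether a model fits, leaving room for KV cache/buffers."""
    weights_gb = params_billions * BITS_PER_WEIGHT[quant] / 8
    return weights_gb + overhead_gb <= vram_gb

# An 8B model at Q4_K_M (~4.9 GB of weights) fits in 10 GB;
# the same model at Q8_0 (~8.5 GB of weights) does not,
# matching the Llama 3.1 8B Q8_0 "CPU Offload" row below.
print(fits_in_vram(8, "Q4_K_M"))  # True
print(fits_in_vram(8, "Q8_0"))    # False
```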
Compatible Models
| Model | Parameters | Best Quant | VRAM Used | Fit | Est. Speed |
|---|---|---|---|---|---|
| Qwen 3 0.6B | 600M | Q4_K_M | 2.5 GB | Runs | ~152 tok/s |
| Gemma 3 1B | 1B | Q8_0 | 2 GB | Runs | ~190 tok/s |
| Llama 3.2 1B | 1B | Q8_0 | 3 GB | Runs | ~127 tok/s |
| DeepSeek R1 1.5B | 1.5B | Q8_0 | 3 GB | Runs | ~127 tok/s |
| Gemma 2 2B | 2B | Q8_0 | 4 GB | Runs | ~95 tok/s |
| Gemma 3n E2B | 2B | Q4_K_M | 3.3 GB | Runs | ~115 tok/s |
| Llama 3.2 3B | 3B | Q8_0 | 5 GB | Runs | ~76 tok/s |
| Phi-3 Mini 3.8B | 3.8B | Q8_0 | 5.8 GB | Runs | ~66 tok/s |
| Phi-4 Mini 3.8B | 3.8B | Q4_K_M | 4.5 GB | Runs | ~84 tok/s |
| Gemma 3 4B | 4B | Q4_K_M | 5 GB | Runs | ~76 tok/s |
| Gemma 3n E4B | 4B | Q4_K_M | 4.5 GB | Runs | ~84 tok/s |
| Qwen 3 4B | 4B | Q4_K_M | 4.5 GB | Runs | ~84 tok/s |
| Falcon 3 7B | 7B | Q4_K_M | 6.8 GB | Runs | ~56 tok/s |
| Qwen 2.5 VL 7B | 7B | Q4_K_M | 7 GB | Runs | ~54 tok/s |
| Cogito 8B | 8B | Q4_K_M | 7.5 GB | Runs | ~51 tok/s |
| DeepSeek R1 8B | 8B | Q4_K_M | 7.5 GB | Runs | ~51 tok/s |
| Nemotron 3 Nano 8B | 8B | Q4_K_M | 7.5 GB | Runs | ~51 tok/s |
| Qwen 3 8B | 8B | Q4_K_M | 7.5 GB | Runs | ~51 tok/s |
| Falcon 3 10B | 10B | Q4_K_M | 8.5 GB | Runs | ~45 tok/s |
| Llama 3.2 Vision 11B | 11B | Q4_K_M | 8.5 GB | Runs | ~45 tok/s |
| DeepSeek R1 7B | 7B | Q8_0 | 9 GB | Runs (tight) | ~42 tok/s |
| Mistral 7B | 7B | Q8_0 | 9 GB | Runs (tight) | ~42 tok/s |
| Qwen 2.5 7B | 7B | Q8_0 | 9 GB | Runs (tight) | ~42 tok/s |
| Qwen 2.5 Coder 7B | 7B | Q8_0 | 9 GB | Runs (tight) | ~42 tok/s |
| Mistral Nemo 12B | 12B | Q4_K_M | 9.5 GB | Runs (tight) | ~40 tok/s |
| Llama 3.1 8B | 8B | Q8_0 | 10 GB | CPU Offload | ~11 tok/s |
| Gemma 2 9B | 9B | Q8_0 | 11 GB | CPU Offload | ~11 tok/s |
| Gemma 3 12B | 12B | Q4_K_M | 10.5 GB | CPU Offload | ~11 tok/s |
| DeepSeek R1 14B | 14B | Q4_K_M | 9.9 GB | CPU Offload | ~11 tok/s |
| Phi-4 14B | 14B | Q4_K_M | 9.9 GB | CPU Offload | ~11 tok/s |
| Phi-4 Reasoning 14B | 14B | Q4_K_M | 11 GB | CPU Offload | ~11 tok/s |
| Qwen 2.5 14B | 14B | Q4_K_M | 9.9 GB | CPU Offload | ~11 tok/s |
| Qwen 2.5 Coder 14B | 14B | Q4_K_M | 12 GB | CPU Offload | ~10 tok/s |
| Qwen 3 14B | 14B | Q4_K_M | 12 GB | CPU Offload | ~10 tok/s |
| StarCoder2 15B | 15B | Q4_K_M | 10.5 GB | CPU Offload | ~11 tok/s |
| Codestral 22B | 22B | Q4_K_M | 14.7 GB | CPU Offload | ~8 tok/s |
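The Est. Speed column behaves like a memory-bandwidth-bound workload: generating one token streams essentially the entire weight file from VRAM, so throughput tops out near bandwidth divided by model size. A back-of-the-envelope check in Python (an upper bound that ignores KV-cache traffic and compute time):

```python
# Upper-bound decode speed for a bandwidth-bound GPU: each generated
# token reads (roughly) all model weights from VRAM once.
BANDWIDTH_GB_S = 380  # Arc B570 memory bandwidth

def est_tokens_per_s(model_gb: float) -> float:
    return BANDWIDTH_GB_S / model_gb

print(round(est_tokens_per_s(7.5)))  # ~51 tok/s, as in the 8B Q4_K_M rows
print(round(est_tokens_per_s(9.0)))  # ~42 tok/s, as in the 7B Q8_0 rows
```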
33 models are too large for this hardware.
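For the rows marked CPU Offload, part of the model spills into system RAM, which is why estimated throughput collapses to roughly 10 tok/s. In llama.cpp-based runtimes you control that split with the number of layers offloaded to the GPU. A minimal sketch using the llama-cpp-python bindings, assuming a build with an Intel-capable backend (SYCL or Vulkan); the model path and layer count here are placeholders to tune per model:

```python
from llama_cpp import Llama

# Offload as many transformer layers as fit in 10 GB of VRAM and
# leave the remainder on the CPU. n_gpu_layers=-1 would offload all
# layers; a partial count is what the "CPU Offload" rows imply.
llm = Llama(
    model_path="phi-4-14b-Q4_K_M.gguf",  # placeholder filename
    n_gpu_layers=30,   # tune: raise until VRAM is nearly full
    n_ctx=4096,        # context length also consumes VRAM (KV cache)
)

out = llm("Explain the KV cache in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```

A common way to find the sweet spot is to raise n_gpu_layers until allocation fails, then back off a layer or two.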