Intel Arc B580

Intel · 12GB GDDR6 · Can run 40 models

Manufacturer: Intel
VRAM: 12 GB
Memory Type: GDDR6
Architecture: Battlemage
Bandwidth: 456 GB/s
TDP: 190 W
MSRP: $249
Released: Dec 13, 2024

AI Notes

The Intel Arc B580 offers 12 GB of VRAM at a very competitive price point, enough to run 7B models comfortably at Q8_0 and 13B–14B models with tight Q4 quantization. Intel GPU support in Ollama is still maturing, so expect some driver and compatibility work during setup. For the price, it provides an impressive amount of VRAM for local AI experimentation.
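As a rough rule of thumb (not stated on this page), a model's weight footprint is parameters × bits-per-weight ÷ 8, plus some headroom for KV cache and runtime buffers. The sketch below applies that to a 12 GB card; the bits-per-weight figures for Q4_K_M and Q8_0, the 1.5 GB overhead, and the 85% "comfortable fit" threshold are all assumptions for illustration, not vendor numbers.

```python
# Rough VRAM-fit estimator for a 12 GB card such as the Arc B580.
# Bits-per-weight values approximate llama.cpp quant formats
# (assumption, not from this page); OVERHEAD_GB stands in for
# KV cache + runtime buffers at a modest context length.

BITS_PER_WEIGHT = {"Q4_K_M": 4.85, "Q8_0": 8.5}  # approximate
OVERHEAD_GB = 1.5  # assumed overhead, grows with context length

def est_vram_gb(params_billions: float, quant: str) -> float:
    """Estimate total VRAM needed: weights plus fixed overhead."""
    weights_gb = params_billions * BITS_PER_WEIGHT[quant] / 8
    return round(weights_gb + OVERHEAD_GB, 1)

def fit(params_billions: float, quant: str, vram_gb: float = 12.0) -> str:
    """Classify fit the way this page does: Runs / tight / offload."""
    need = est_vram_gb(params_billions, quant)
    if need <= vram_gb * 0.85:   # assumed comfort margin
        return "Runs"
    if need <= vram_gb:
        return "Runs (tight)"
    return "CPU Offload"

if __name__ == "__main__":
    for p, q in [(7, "Q8_0"), (14, "Q4_K_M"), (24, "Q4_K_M")]:
        print(f"{p}B {q}: ~{est_vram_gb(p, q)} GB -> {fit(p, q)}")
```

The estimates land close to the table below (7B at Q8_0 ≈ 9 GB, 14B at Q4_K_M ≈ 10 GB, 24B at Q4_K_M well past 12 GB), though real usage varies with context length and backend.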

Compatible Models

| Model | Parameters | Best Quant | VRAM Used | Fit | Est. Speed |
| --- | --- | --- | --- | --- | --- |
| Qwen 3 0.6B | 600M | Q4_K_M | 2.5 GB | Runs | ~182 tok/s |
| Gemma 3 1B | 1B | Q8_0 | 2 GB | Runs | ~228 tok/s |
| Llama 3.2 1B | 1B | Q8_0 | 3 GB | Runs | ~152 tok/s |
| DeepSeek R1 1.5B | 1.5B | Q8_0 | 3 GB | Runs | ~152 tok/s |
| Gemma 2 2B | 2B | Q8_0 | 4 GB | Runs | ~114 tok/s |
| Gemma 3n E2B | 2B | Q4_K_M | 3.3 GB | Runs | ~138 tok/s |
| Llama 3.2 3B | 3B | Q8_0 | 5 GB | Runs | ~91 tok/s |
| Phi-3 Mini 3.8B | 3.8B | Q8_0 | 5.8 GB | Runs | ~79 tok/s |
| Phi-4 Mini 3.8B | 3.8B | Q4_K_M | 4.5 GB | Runs | ~101 tok/s |
| Gemma 3 4B | 4B | Q4_K_M | 5 GB | Runs | ~91 tok/s |
| Gemma 3n E4B | 4B | Q4_K_M | 4.5 GB | Runs | ~101 tok/s |
| Qwen 3 4B | 4B | Q4_K_M | 4.5 GB | Runs | ~101 tok/s |
| DeepSeek R1 7B | 7B | Q8_0 | 9 GB | Runs | ~51 tok/s |
| Falcon 3 7B | 7B | Q4_K_M | 6.8 GB | Runs | ~67 tok/s |
| Mistral 7B | 7B | Q8_0 | 9 GB | Runs | ~51 tok/s |
| Qwen 2.5 7B | 7B | Q8_0 | 9 GB | Runs | ~51 tok/s |
| Qwen 2.5 Coder 7B | 7B | Q8_0 | 9 GB | Runs | ~51 tok/s |
| Qwen 2.5 VL 7B | 7B | Q4_K_M | 7 GB | Runs | ~65 tok/s |
| Cogito 8B | 8B | Q4_K_M | 7.5 GB | Runs | ~61 tok/s |
| DeepSeek R1 8B | 8B | Q4_K_M | 7.5 GB | Runs | ~61 tok/s |
| Llama 3.1 8B | 8B | Q8_0 | 10 GB | Runs | ~46 tok/s |
| Nemotron 3 Nano 8B | 8B | Q4_K_M | 7.5 GB | Runs | ~61 tok/s |
| Qwen 3 8B | 8B | Q4_K_M | 7.5 GB | Runs | ~61 tok/s |
| Falcon 3 10B | 10B | Q4_K_M | 8.5 GB | Runs | ~54 tok/s |
| Llama 3.2 Vision 11B | 11B | Q4_K_M | 8.5 GB | Runs | ~54 tok/s |
| Mistral Nemo 12B | 12B | Q4_K_M | 9.5 GB | Runs | ~48 tok/s |
| DeepSeek R1 14B | 14B | Q4_K_M | 9.9 GB | Runs | ~46 tok/s |
| Phi-4 14B | 14B | Q4_K_M | 9.9 GB | Runs | ~46 tok/s |
| Qwen 2.5 14B | 14B | Q4_K_M | 9.9 GB | Runs | ~46 tok/s |
| Gemma 2 9B | 9B | Q8_0 | 11 GB | Runs (tight) | ~41 tok/s |
| Gemma 3 12B | 12B | Q4_K_M | 10.5 GB | Runs (tight) | ~43 tok/s |
| Phi-4 Reasoning 14B | 14B | Q4_K_M | 11 GB | Runs (tight) | ~41 tok/s |
| Qwen 2.5 Coder 14B | 14B | Q4_K_M | 12 GB | CPU Offload | ~11 tok/s |
| Qwen 3 14B | 14B | Q4_K_M | 12 GB | CPU Offload | ~11 tok/s |
| StarCoder2 15B | 15B | Q8_0 | 17 GB | CPU Offload | ~8 tok/s |
| Codestral 22B | 22B | Q4_K_M | 14.7 GB | CPU Offload | ~9 tok/s |
| Devstral 24B | 24B | Q4_K_M | 17 GB | CPU Offload | ~8 tok/s |
| Magistral Small 24B | 24B | Q4_K_M | 17 GB | CPU Offload | ~8 tok/s |
| Mistral Small 3.1 24B | 24B | Q4_K_M | 18 GB | CPU Offload | ~8 tok/s |
| Gemma 2 27B | 27B | Q4_K_M | 17.7 GB | CPU Offload | ~8 tok/s |
29 models are too large for this hardware.