MacBook Air M3 16GB

Apple · M3 · 16GB Unified Memory · Can run 49 models

Manufacturer Apple
Unified Memory 16 GB
Chip M3
CPU Cores 8
GPU Cores 10
Neural Engine Cores 16
Memory Bandwidth 100 GB/s
MSRP $1,099
Released Mar 8, 2024

AI Notes

The MacBook Air M3 with 16 GB of unified memory is a fanless, portable option for running local AI models. It handles 7B models comfortably and can run models in the 13B–14B class with 4-bit quantization. Its 100 GB/s memory bandwidth is lower than the M4's, so token generation is somewhat slower, but the machine remains capable for on-the-go AI work.
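As a concrete illustration, a minimal sketch of running one of the 7B models from the table below fully on the M3's GPU via llama-cpp-python with the Metal backend. The model filename is a placeholder (any Q4_K_M or Q8_0 GGUF of a 7B model works), and the library must be installed separately (pip install llama-cpp-python), so treat this as a sketch under those assumptions rather than a recommended setup.

```python
# Sketch: run a quantized 7B GGUF model on Apple Silicon with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="mistral-7b-instruct-q4_k_m.gguf",  # placeholder path (assumption)
    n_gpu_layers=-1,  # offload all layers to the M3 GPU via Metal
    n_ctx=4096,       # context window; longer contexts use more unified memory
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain unified memory in one paragraph."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```

With all layers offloaded, the model weights and KV cache live entirely in unified memory, which is why the footprint figures in the table below map directly onto the 16 GB budget.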

Compatible Models

Model Parameters Best Quant Memory Used Fit Est. Speed
Qwen 3 0.6B 600M Q4_K_M 2.5 GB Runs ~40 tok/s
Gemma 3 1B 1B Q8_0 2 GB Runs ~50 tok/s
Llama 3.2 1B 1B Q8_0 3 GB Runs ~33 tok/s
DeepSeek R1 1.5B 1.5B Q8_0 3 GB Runs ~33 tok/s
Gemma 2 2B 2B Q8_0 4 GB Runs ~25 tok/s
Gemma 3n E2B 2B Q4_K_M 3.3 GB Runs ~30 tok/s
Llama 3.2 3B 3B Q8_0 5 GB Runs ~20 tok/s
Phi-3 Mini 3.8B 3.8B Q8_0 5.8 GB Runs ~17 tok/s
Phi-4 Mini 3.8B 3.8B Q4_K_M 4.5 GB Runs ~22 tok/s
Gemma 3 4B 4B Q4_K_M 5 GB Runs ~20 tok/s
Gemma 3n E4B 4B Q4_K_M 4.5 GB Runs ~22 tok/s
Qwen 3 4B 4B Q4_K_M 4.5 GB Runs ~22 tok/s
DeepSeek R1 7B 7B Q8_0 9 GB Runs ~11 tok/s
Falcon 3 7B 7B Q4_K_M 6.8 GB Runs ~15 tok/s
Mistral 7B 7B Q8_0 9 GB Runs ~11 tok/s
Qwen 2.5 7B 7B Q8_0 9 GB Runs ~11 tok/s
Qwen 2.5 Coder 7B 7B Q8_0 9 GB Runs ~11 tok/s
Qwen 2.5 VL 7B 7B Q4_K_M 7 GB Runs ~14 tok/s
Cogito 8B 8B Q4_K_M 7.5 GB Runs ~13 tok/s
DeepSeek R1 8B 8B Q4_K_M 7.5 GB Runs ~13 tok/s
Llama 3.1 8B 8B Q8_0 10 GB Runs ~10 tok/s
Nemotron 3 Nano 8B 8B Q4_K_M 7.5 GB Runs ~13 tok/s
Qwen 3 8B 8B Q4_K_M 7.5 GB Runs ~13 tok/s
Gemma 2 9B 9B Q8_0 11 GB Runs ~9 tok/s
Falcon 3 10B 10B Q4_K_M 8.5 GB Runs ~12 tok/s
Llama 3.2 Vision 11B 11B Q4_K_M 8.5 GB Runs ~12 tok/s
Gemma 3 12B 12B Q4_K_M 10.5 GB Runs ~10 tok/s
Mistral Nemo 12B 12B Q4_K_M 9.5 GB Runs ~11 tok/s
DeepSeek R1 14B 14B Q4_K_M 9.9 GB Runs ~10 tok/s
Phi-4 14B 14B Q4_K_M 9.9 GB Runs ~10 tok/s
Phi-4 Reasoning 14B 14B Q4_K_M 11 GB Runs ~9 tok/s
Qwen 2.5 14B 14B Q4_K_M 9.9 GB Runs ~10 tok/s
Qwen 2.5 Coder 14B 14B Q4_K_M 12 GB Runs ~8 tok/s
Qwen 3 14B 14B Q4_K_M 12 GB Runs ~8 tok/s
Codestral 22B 22B Q4_K_M 14.7 GB Runs (tight) ~7 tok/s
StarCoder2 15B 15B Q8_0 17 GB CPU Offload ~2 tok/s
Devstral 24B 24B Q4_K_M 17 GB CPU Offload ~2 tok/s
Magistral Small 24B 24B Q4_K_M 17 GB CPU Offload ~2 tok/s
Mistral Small 3.1 24B 24B Q4_K_M 18 GB CPU Offload ~2 tok/s
Gemma 2 27B 27B Q4_K_M 17.7 GB CPU Offload ~2 tok/s
Gemma 3 27B 27B Q4_K_M 20 GB CPU Offload ~2 tok/s
Qwen 3 30B-A3B (MoE) 30B Q4_K_M 22 GB CPU Offload ~2 tok/s
Cogito 32B 32B Q4_K_M 21.5 GB CPU Offload ~2 tok/s
DeepSeek R1 32B 32B Q4_K_M 20.7 GB CPU Offload ~2 tok/s
Qwen 2.5 32B 32B Q4_K_M 20.7 GB CPU Offload ~2 tok/s
Qwen 2.5 Coder 32B 32B Q4_K_M 23 GB CPU Offload ~1 tok/s
Qwen 3 32B 32B Q4_K_M 23 GB CPU Offload ~1 tok/s
QwQ 32B 32B Q4_K_M 21.5 GB CPU Offload ~2 tok/s
Command R 35B 35B Q4_K_M 22.5 GB CPU Offload ~1 tok/s
20 models are too large for this hardware.
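The fit and speed figures above follow a simple rule of thumb: a model "Runs" when its quantized footprint (plus context overhead) stays within the unified memory macOS can give the GPU, and decode speed is roughly memory bandwidth divided by the bytes read per token, i.e. about 100 GB/s ÷ 9 GB ≈ 11 tok/s for a 7B model at Q8_0. The sketch below reproduces that estimate; the usable-memory ceiling, bytes-per-parameter factors, and overhead margin are assumptions chosen to roughly match the table, not measured values.

```python
# Rough fit-and-speed estimator mirroring the table above.
# Assumptions (not measurements): ~15 GB of the 16 GB is usable for model
# weights, and decode speed is bandwidth-bound at ~(bandwidth / model size).
BANDWIDTH_GBS = 100.0   # M3 memory bandwidth
USABLE_GB = 15.0        # assumed ceiling: 16 GB minus headroom for macOS

# Approximate GB of weights per billion parameters (assumption)
BYTES_PER_PARAM = {"Q4_K_M": 0.56, "Q8_0": 1.0, "F16": 2.0}

def estimate(params_b: float, quant: str) -> tuple[str, float]:
    """Return (fit, estimated tokens/sec) for a dense model of params_b billion parameters."""
    size_gb = params_b * BYTES_PER_PARAM[quant] * 1.15  # +15% for KV cache/overhead (assumption)
    fit = "Runs" if size_gb <= USABLE_GB else "CPU Offload"
    tok_s = BANDWIDTH_GBS / size_gb  # bandwidth-bound decode estimate
    return fit, round(tok_s, 1)

print(estimate(7, "Q8_0"))     # ('Runs', ~12) — close to the ~11 tok/s listed
print(estimate(32, "Q4_K_M"))  # ('CPU Offload', ...) — matches the offloaded 32B entries
```

Models that spill past the usable ceiling fall back to CPU offload, which is why the 24B–35B entries drop to roughly 1–2 tok/s despite fitting on disk.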