Mac Studio M4 Max 36GB
Apple · M4 Max · 36GB Unified Memory · Can run 41 models
| Spec | Value |
|---|---|
| Manufacturer | Apple |
| Unified Memory | 36 GB |
| Chip | M4 Max |
| CPU Cores | 14 |
| GPU Cores | 32 |
| Neural Engine Cores | 16 |
| Memory Bandwidth | 546 GB/s |
| MSRP | $1,999 |
| Released | Mar 12, 2025 |
AI Notes
The Mac Studio M4 Max 36GB delivers 546 GB/s of memory bandwidth, which makes smaller models fly: 13B-class models run very fast, and 30B-class models still reach moderate-to-fast speeds. The 36 GB of unified memory rules out 70B-class models without CPU offload, but for small and mid-size models the bandwidth advantage makes this an excellent inference machine.
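The speed estimates in the table below are consistent with a common bandwidth-bound rule of thumb: generating one token reads every weight once, so decode speed is at most memory bandwidth divided by the model's resident size. A minimal sketch of that rule (the function name is ours, and the rule ignores compute limits, KV-cache traffic, and MoE sparsity):

```python
# Rule-of-thumb decode speed for memory-bandwidth-bound inference:
# each generated token streams all weights through the GPU once, so
#   tok/s ≈ bandwidth (GB/s) / model size in memory (GB).
def est_tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
    return bandwidth_gb_s / model_size_gb

if __name__ == "__main__":
    # A 7B model at Q8_0 occupies roughly 9 GB (see the table below).
    print(f"{est_tokens_per_sec(546, 9):.0f} tok/s")  # ≈ 61 tok/s
```

Dividing 546 GB/s by each row's memory footprint reproduces the listed estimates closely; real-world speeds will be somewhat lower once prompt processing and cache reads are included.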
Compatible Models
| Model | Parameters | Best Quant | VRAM Used | Fit | Est. Speed |
|---|---|---|---|---|---|
| Gemma 3 1B | 1B | Q8_0 | 2 GB | Runs | ~273 tok/s |
| Llama 3.2 1B | 1B | Q8_0 | 3 GB | Runs | ~182 tok/s |
| DeepSeek R1 1.5B | 1.5B | Q8_0 | 3 GB | Runs | ~182 tok/s |
| Gemma 2 2B | 2B | Q8_0 | 4 GB | Runs | ~137 tok/s |
| Llama 3.2 3B | 3B | Q8_0 | 5 GB | Runs | ~109 tok/s |
| Phi-3 Mini 3.8B | 3.8B | Q8_0 | 5.8 GB | Runs | ~94 tok/s |
| Phi-4 Mini 3.8B | 3.8B | Q4_K_M | 4.5 GB | Runs | ~121 tok/s |
| Gemma 3 4B | 4B | Q4_K_M | 5 GB | Runs | ~109 tok/s |
| Qwen 3 4B | 4B | Q4_K_M | 4.5 GB | Runs | ~121 tok/s |
| DeepSeek R1 7B | 7B | Q8_0 | 9 GB | Runs | ~61 tok/s |
| Mistral 7B | 7B | Q8_0 | 9 GB | Runs | ~61 tok/s |
| Qwen 2.5 7B | 7B | Q8_0 | 9 GB | Runs | ~61 tok/s |
| Qwen 2.5 Coder 7B | 7B | Q8_0 | 9 GB | Runs | ~61 tok/s |
| DeepSeek R1 8B | 8B | Q4_K_M | 7.5 GB | Runs | ~73 tok/s |
| Llama 3.1 8B | 8B | Q8_0 | 10 GB | Runs | ~55 tok/s |
| Qwen 3 8B | 8B | Q4_K_M | 7.5 GB | Runs | ~73 tok/s |
| Gemma 2 9B | 9B | Q8_0 | 11 GB | Runs | ~50 tok/s |
| Gemma 3 12B | 12B | Q4_K_M | 10.5 GB | Runs | ~52 tok/s |
| Mistral Nemo 12B | 12B | Q4_K_M | 9.5 GB | Runs | ~57 tok/s |
| DeepSeek R1 14B | 14B | Q4_K_M | 9.9 GB | Runs | ~55 tok/s |
| Phi-4 14B | 14B | Q4_K_M | 9.9 GB | Runs | ~55 tok/s |
| Qwen 2.5 14B | 14B | Q4_K_M | 9.9 GB | Runs | ~55 tok/s |
| Qwen 2.5 Coder 14B | 14B | Q4_K_M | 12 GB | Runs | ~46 tok/s |
| Qwen 3 14B | 14B | Q4_K_M | 12 GB | Runs | ~46 tok/s |
| StarCoder2 15B | 15B | Q8_0 | 17 GB | Runs | ~32 tok/s |
| Codestral 22B | 22B | Q4_K_M | 14.7 GB | Runs | ~37 tok/s |
| Devstral 24B | 24B | Q4_K_M | 17 GB | Runs | ~32 tok/s |
| Mistral Small 3.1 24B | 24B | Q4_K_M | 18 GB | Runs | ~30 tok/s |
| Gemma 2 27B | 27B | Q4_K_M | 17.7 GB | Runs | ~31 tok/s |
| Gemma 3 27B | 27B | Q4_K_M | 20 GB | Runs | ~27 tok/s |
| Qwen 3 30B-A3B (MoE) | 30B | Q4_K_M | 22 GB | Runs | ~25 tok/s |
| DeepSeek R1 32B | 32B | Q4_K_M | 20.7 GB | Runs | ~26 tok/s |
| Qwen 2.5 32B | 32B | Q4_K_M | 20.7 GB | Runs | ~26 tok/s |
| Qwen 2.5 Coder 32B | 32B | Q4_K_M | 23 GB | Runs | ~24 tok/s |
| Qwen 3 32B | 32B | Q4_K_M | 23 GB | Runs | ~24 tok/s |
| Command R 35B | 35B | Q4_K_M | 22.5 GB | Runs | ~24 tok/s |
| Mixtral 8x7B | 47B | Q4_K_M | 29.7 GB | Runs | ~18 tok/s |
| DeepSeek R1 70B | 70B | Q4_K_M | 43.5 GB | CPU Offload | ~13 tok/s |
| Llama 3.1 70B | 70B | Q4_K_M | 43.5 GB | CPU Offload | ~13 tok/s |
| Llama 3.3 70B | 70B | Q4_K_M | 43.5 GB | CPU Offload | ~13 tok/s |
| Qwen 2.5 72B | 72B | Q4_K_M | 44.7 GB | CPU Offload | ~12 tok/s |
2 models are too large for this hardware.
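The Fit column can be sanity-checked with a back-of-the-envelope memory estimate: weights at the quant's effective bits per weight, plus headroom for the KV cache and runtime buffers. A sketch under stated assumptions (the ~8.5 and ~4.85 bits-per-weight figures, the 1.2× overhead factor, and the 75% usable-memory fraction are typical GGUF/macOS rules of thumb, not figures from this page):

```python
# Back-of-the-envelope check of whether a quantized model fits in unified memory.
# Assumed effective bits per weight for common GGUF quants (approximate).
BITS_PER_WEIGHT = {"Q4_K_M": 4.85, "Q8_0": 8.5}

def est_memory_gb(params_b: float, quant: str, overhead: float = 1.2) -> float:
    """Estimated resident size in GB for a model with params_b billion weights."""
    weight_gb = params_b * BITS_PER_WEIGHT[quant] / 8  # GB per billion params
    return weight_gb * overhead  # headroom for KV cache and runtime buffers

def fits(params_b: float, quant: str, memory_gb: float = 36) -> bool:
    # macOS reserves part of unified memory; assume ~75% is usable for the model.
    return est_memory_gb(params_b, quant) <= memory_gb * 0.75

print(round(est_memory_gb(7, "Q8_0"), 1))   # ≈ 8.9 GB, close to the table's 9 GB
print(fits(32, "Q4_K_M"), fits(70, "Q4_K_M"))  # True False
```

This roughly reproduces the table's memory column and shows why 70B-class Q4_K_M models (~44 GB) land in CPU Offload on a 36 GB machine.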