Mac Studio M4 Ultra 192GB

Apple · M4 Ultra · 192GB Unified Memory · Can run 66 models

Manufacturer: Apple
Unified Memory: 192 GB
Chip: M4 Ultra
CPU Cores: 32
GPU Cores: 80
Neural Engine Cores: 32
Memory Bandwidth: 819 GB/s
MSRP: $5,999
Released: Mar 12, 2025

AI Notes

The Mac Studio M4 Ultra 192GB is the most powerful consumer Apple Silicon machine for AI workloads. With 192 GB of unified memory and 819 GB/s of memory bandwidth, it can run 70B models at full 16-bit precision and load 100B+ models with 4-bit quantization. The Ultra's memory bandwidth, twice that of the Max variant, delivers strong token generation rates on the largest local AI models.
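Whether a model fits comes down to its parameter count times the bytes each parameter occupies at a given quantization. The sketch below estimates footprints with assumed bytes-per-parameter figures (roughly 0.62 B/param for Q4_K_M and 1.29 B/param for Q8_0, back-solved from the table below; real GGUF file sizes vary with architecture and context overhead):

```python
# Rough memory-footprint estimate for a quantized model.
# The bytes-per-parameter values are assumptions fitted to the
# compatibility table on this page, not exact GGUF figures.
BYTES_PER_PARAM = {
    "Q4_K_M": 0.62,  # ~5 bits/param effective (assumption)
    "Q8_0": 1.29,    # ~8.5 bits/param plus overhead (assumption)
    "F16": 2.0,      # full 16-bit weights
}

def est_vram_gb(params_billions: float, quant: str) -> float:
    """Estimated memory used, in GB, for a model of the given size."""
    return params_billions * BYTES_PER_PARAM[quant]

print(round(est_vram_gb(70, "Q4_K_M"), 1))  # ~43.4 GB, in line with the table
print(round(est_vram_gb(70, "F16"), 1))     # ~140 GB: why 192 GB fits FP16 70B
```

By this estimate a 70B model needs about 140 GB at FP16, which is why it fits in 192 GB of unified memory, while a 405B model at Q4_K_M (~245 GB) does not.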

Compatible Models

| Model | Parameters | Best Quant | VRAM Used | Fit | Est. Speed |
|---|---|---|---|---|---|
| Qwen 3 0.6B | 600M | Q4_K_M | 2.5 GB | Runs | ~328 tok/s |
| Gemma 3 1B | 1B | Q8_0 | 2 GB | Runs | ~410 tok/s |
| Llama 3.2 1B | 1B | Q8_0 | 3 GB | Runs | ~273 tok/s |
| DeepSeek R1 1.5B | 1.5B | Q8_0 | 3 GB | Runs | ~273 tok/s |
| Gemma 2 2B | 2B | Q8_0 | 4 GB | Runs | ~205 tok/s |
| Gemma 3n E2B | 2B | Q4_K_M | 3.3 GB | Runs | ~248 tok/s |
| Llama 3.2 3B | 3B | Q8_0 | 5 GB | Runs | ~164 tok/s |
| Phi-3 Mini 3.8B | 3.8B | Q8_0 | 5.8 GB | Runs | ~141 tok/s |
| Phi-4 Mini 3.8B | 3.8B | Q4_K_M | 4.5 GB | Runs | ~182 tok/s |
| Gemma 3 4B | 4B | Q4_K_M | 5 GB | Runs | ~164 tok/s |
| Gemma 3n E4B | 4B | Q4_K_M | 4.5 GB | Runs | ~182 tok/s |
| Qwen 3 4B | 4B | Q4_K_M | 4.5 GB | Runs | ~182 tok/s |
| DeepSeek R1 7B | 7B | Q8_0 | 9 GB | Runs | ~91 tok/s |
| Falcon 3 7B | 7B | Q4_K_M | 6.8 GB | Runs | ~120 tok/s |
| Mistral 7B | 7B | Q8_0 | 9 GB | Runs | ~91 tok/s |
| Qwen 2.5 7B | 7B | Q8_0 | 9 GB | Runs | ~91 tok/s |
| Qwen 2.5 Coder 7B | 7B | Q8_0 | 9 GB | Runs | ~91 tok/s |
| Qwen 2.5 VL 7B | 7B | Q4_K_M | 7 GB | Runs | ~117 tok/s |
| Cogito 8B | 8B | Q4_K_M | 7.5 GB | Runs | ~109 tok/s |
| DeepSeek R1 8B | 8B | Q4_K_M | 7.5 GB | Runs | ~109 tok/s |
| Llama 3.1 8B | 8B | Q8_0 | 10 GB | Runs | ~82 tok/s |
| Nemotron 3 Nano 8B | 8B | Q4_K_M | 7.5 GB | Runs | ~109 tok/s |
| Qwen 3 8B | 8B | Q4_K_M | 7.5 GB | Runs | ~109 tok/s |
| Gemma 2 9B | 9B | Q8_0 | 11 GB | Runs | ~74 tok/s |
| Falcon 3 10B | 10B | Q4_K_M | 8.5 GB | Runs | ~96 tok/s |
| Llama 3.2 Vision 11B | 11B | Q4_K_M | 8.5 GB | Runs | ~96 tok/s |
| Gemma 3 12B | 12B | Q4_K_M | 10.5 GB | Runs | ~78 tok/s |
| Mistral Nemo 12B | 12B | Q4_K_M | 9.5 GB | Runs | ~86 tok/s |
| DeepSeek R1 14B | 14B | Q4_K_M | 9.9 GB | Runs | ~83 tok/s |
| Phi-4 14B | 14B | Q4_K_M | 9.9 GB | Runs | ~83 tok/s |
| Phi-4 Reasoning 14B | 14B | Q4_K_M | 11 GB | Runs | ~74 tok/s |
| Qwen 2.5 14B | 14B | Q4_K_M | 9.9 GB | Runs | ~83 tok/s |
| Qwen 2.5 Coder 14B | 14B | Q4_K_M | 12 GB | Runs | ~68 tok/s |
| Qwen 3 14B | 14B | Q4_K_M | 12 GB | Runs | ~68 tok/s |
| StarCoder2 15B | 15B | Q8_0 | 17 GB | Runs | ~48 tok/s |
| Codestral 22B | 22B | Q4_K_M | 14.7 GB | Runs | ~56 tok/s |
| Devstral 24B | 24B | Q4_K_M | 17 GB | Runs | ~48 tok/s |
| Magistral Small 24B | 24B | Q4_K_M | 17 GB | Runs | ~48 tok/s |
| Mistral Small 3.1 24B | 24B | Q4_K_M | 18 GB | Runs | ~46 tok/s |
| Gemma 2 27B | 27B | Q4_K_M | 17.7 GB | Runs | ~46 tok/s |
| Gemma 3 27B | 27B | Q4_K_M | 20 GB | Runs | ~41 tok/s |
| Qwen 3 30B-A3B (MoE) | 30B | Q4_K_M | 22 GB | Runs | ~37 tok/s |
| Cogito 32B | 32B | Q4_K_M | 21.5 GB | Runs | ~38 tok/s |
| DeepSeek R1 32B | 32B | Q4_K_M | 20.7 GB | Runs | ~40 tok/s |
| Qwen 2.5 32B | 32B | Q4_K_M | 20.7 GB | Runs | ~40 tok/s |
| Qwen 2.5 Coder 32B | 32B | Q4_K_M | 23 GB | Runs | ~36 tok/s |
| Qwen 3 32B | 32B | Q4_K_M | 23 GB | Runs | ~36 tok/s |
| QwQ 32B | 32B | Q4_K_M | 21.5 GB | Runs | ~38 tok/s |
| Command R 35B | 35B | Q4_K_M | 22.5 GB | Runs | ~36 tok/s |
| Mixtral 8x7B | 47B | Q4_K_M | 29.7 GB | Runs | ~28 tok/s |
| Cogito 70B | 70B | Q4_K_M | 43 GB | Runs | ~19 tok/s |
| DeepSeek R1 70B | 70B | Q4_K_M | 43.5 GB | Runs | ~19 tok/s |
| Llama 3.1 70B | 70B | Q4_K_M | 43.5 GB | Runs | ~19 tok/s |
| Llama 3.3 70B | 70B | Q4_K_M | 43.5 GB | Runs | ~19 tok/s |
| Qwen 2.5 72B | 72B | Q4_K_M | 44.7 GB | Runs | ~18 tok/s |
| Qwen 2.5 VL 72B | 72B | Q4_K_M | 41 GB | Runs | ~20 tok/s |
| Llama 3.2 Vision 90B | 90B | Q4_K_M | 50 GB | Runs | ~16 tok/s |
| Command R+ 104B | 104B | Q4_K_M | 57 GB | Runs | ~14 tok/s |
| Llama 4 Scout (109B/17B active) | 109B | Q4_K_M | 72 GB | Runs | ~11 tok/s |
| Command A 111B | 111B | Q4_K_M | 61 GB | Runs | ~13 tok/s |
| Devstral 2 123B | 123B | Q4_K_M | 67 GB | Runs | ~12 tok/s |
| Mistral Large 2 123B | 123B | Q4_K_M | 67 GB | Runs | ~12 tok/s |
| Mixtral 8x22B | 141B | Q4_K_M | 86 GB | Runs | ~10 tok/s |
| Qwen 3 235B-A22B | 235B | Q4_K_M | 138 GB | Runs | ~6 tok/s |
| Llama 4 Maverick | 400B | Q4_K_M | 228 GB | CPU Offload | ~1 tok/s |
| Llama 3.1 405B | 405B | Q4_K_M | 244.5 GB | CPU Offload | ~1 tok/s |
3 models are too large for this hardware and are not listed.
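The speed column above appears to follow a simple bandwidth-bound model of single-stream inference: each generated token streams the full weights through memory once, so tokens per second is roughly memory bandwidth divided by model size. A minimal sketch, assuming that relationship holds:

```python
# Bandwidth-bound speed estimate: tok/s ≈ bandwidth / model size.
# A sketch only; real throughput also depends on compute, KV cache,
# context length, and the inference runtime.
BANDWIDTH_GB_S = 819  # M4 Ultra memory bandwidth from the spec sheet

def est_tok_per_s(model_size_gb: float) -> float:
    """Rough upper bound on single-stream generation speed."""
    return BANDWIDTH_GB_S / model_size_gb

print(round(est_tok_per_s(43.5)))  # ~19 tok/s, matching the 70B Q4_K_M rows
print(round(est_tok_per_s(2.5)))   # ~328 tok/s, matching Qwen 3 0.6B
```

This is why halving a model's size via quantization roughly doubles its generation speed on the same hardware, and why the ~1 tok/s CPU Offload rows fall off a cliff: once weights spill past unified memory, the effective bandwidth is no longer 819 GB/s.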