Phi-4 14B
by Microsoft · phi family
14B parameters
text-generation · code-generation · reasoning · math · summarization
Phi-4 14B is Microsoft's small language model, notable for strong performance relative to its size, particularly in mathematical reasoning and other STEM tasks. Trained heavily on synthetic data with carefully designed training curricula, it often matches or exceeds models several times its size. The model is strongest at structured problem solving, coding, and analytical tasks. Its 16K context window and MIT license make it attractive for both personal and commercial local deployments.
Quick Start with Ollama
ollama run phi4:14b-q4_K_M
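Once the model has been pulled, you can also call it from code through Ollama's local REST API. The snippet below is a minimal sketch, assuming the Ollama server is running on its default port 11434 and the phi4:14b-q4_K_M tag is available locally; the prompt and sampling options are illustrative only.

```python
# Minimal sketch: query a locally running Ollama server (default port 11434).
# Assumes the model was pulled with `ollama pull phi4:14b-q4_K_M`.
import json
import urllib.request

payload = {
    "model": "phi4:14b-q4_K_M",
    "prompt": "Solve step by step: what is the sum of the first 50 odd numbers?",
    "stream": False,                     # return one JSON object, not a token stream
    "options": {"temperature": 0.2},     # illustrative sampling setting
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    body = json.loads(resp.read())

print(body["response"])
```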
Quantization Options

| Format | File Size | VRAM Required | Quality | Ollama Tag |
|---|---|---|---|---|
| Q4_K_M (recommended) | 7.1 GB | 9.9 GB | ★★★★★ | 14b-q4_K_M |
| Q5_K_M | 8.2 GB | 11.3 GB | ★★★★★ | 14b-q5_K_M |
| Q8_0 | 12.6 GB | 16 GB | ★★★★★ | 14b-q8_0 |
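As a rough way to read the table programmatically, the hypothetical helper below (our own illustration, not part of Ollama or this model card) picks the largest quantization whose VRAM requirement from the table fits a given budget:

```python
# Hypothetical helper: choose the highest-quality quantization that fits
# the available VRAM, using the figures from the table above.
QUANTS = [
    # (Ollama tag, VRAM required in GB), ordered smallest to largest
    ("14b-q4_K_M", 9.9),
    ("14b-q5_K_M", 11.3),
    ("14b-q8_0", 16.0),
]

def pick_quant(vram_gb: float) -> str | None:
    """Return the best-fitting tag, or None if even Q4_K_M does not fit."""
    best = None
    for tag, required in QUANTS:
        if required <= vram_gb:
            best = tag
    return best

print(pick_quant(12))   # -> 14b-q5_K_M
print(pick_quant(8))    # -> None
```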
Compatible Hardware for Q4_K_M
Showing compatibility for the recommended quantization (Q4_K_M, 9.9 GB VRAM).
Benchmark Scores
MMLU: 84.8