# Magistral Small 24B
by Mistral AI · Mistral family
24B parameters
Tags: text-generation · code-generation · reasoning · math · multilingual
Magistral Small 24B is Mistral AI's reasoning-focused model, designed for complex problem-solving with transparent chain-of-thought output. It delivers strong performance on mathematical reasoning, code generation, and multilingual tasks. With its 128K-token context window and efficient 24B parameter count, Magistral Small strikes a balance between reasoning capability and resource requirements, making it practical for local deployment on mid-range hardware.
## Quick Start with Ollama

```shell
ollama run magistral
```

| Spec | Value |
|---|---|
| Creator | Mistral AI |
| Parameters | 24B |
| Architecture | transformer-decoder |
| Context | 128K tokens |
| Released | Jun 18, 2025 |
| License | Apache 2.0 |
| Ollama | magistral |
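Beyond the CLI, a local Ollama server exposes a REST API on port 11434. The sketch below builds the JSON body for a non-streaming `/api/generate` call; actually sending it assumes a running Ollama instance with the `magistral` model already pulled.

```python
import json

def build_generate_request(prompt: str, model: str = "magistral") -> str:
    """Return the JSON body for a non-streaming /api/generate call."""
    payload = {
        "model": model,    # the Ollama tag from the table above
        "prompt": prompt,
        "stream": False,   # ask for one complete response object
    }
    return json.dumps(payload)

# POST this body to http://localhost:11434/api/generate on a running server.
body = build_generate_request("Prove that the square root of 2 is irrational.")
print(body)
```

With `"stream": False`, the server returns a single JSON object whose `response` field holds the full completion, rather than a stream of partial chunks.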
## Quantization Options

| Format | File Size | VRAM Required | Ollama Tag |
|---|---|---|---|
| Q4_K_M (recommended) | 14 GB | 17 GB | q4_K_M |
| Q8_0 | 25 GB | 28 GB | q8_0 |
| F16 | 48 GB | 52 GB | fp16 |
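The file sizes above follow directly from the parameter count and the effective bits per weight of each format. A rough sketch (the bits-per-weight figures are approximate conventions for llama.cpp quantization formats, not values taken from this card):

```python
PARAMS = 24e9  # 24B parameters

# Approximate effective bits per weight for each format.
BITS_PER_WEIGHT = {
    "Q4_K_M": 4.8,   # mixed 4-/6-bit blocks plus scales
    "Q8_0": 8.5,     # 8-bit weights plus per-block scales
    "F16": 16.0,     # full half precision
}

def est_file_gb(params: float, bpw: float) -> float:
    """Approximate on-disk size in decimal gigabytes."""
    return params * bpw / 8 / 1e9

for fmt, bpw in BITS_PER_WEIGHT.items():
    print(f"{fmt}: ~{est_file_gb(PARAMS, bpw):.1f} GB")
```

The estimates (roughly 14, 26, and 48 GB) line up with the table; the VRAM column is higher than the file size because the KV cache and activation buffers also need to fit in memory.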
## Compatible Hardware

The recommended Q4_K_M quantization requires 17 GB of VRAM, so it fits on a single 24 GB consumer GPU.
## Benchmark Scores

| Benchmark | Score |
|---|---|
| MMLU | 77.0 |