Mistral Large 2 123B
by Mistral AI · mistral family
text-generation code-generation reasoning multilingual math tool-use creative-writing summarization
Mistral Large 2 is Mistral AI's flagship 123 billion parameter model, delivering significant improvements in code generation, mathematics, reasoning, and multilingual support across dozens of languages. It features a massive 128K context window and strong instruction-following capabilities. The model excels at complex reasoning tasks, tool use, and function calling, making it well-suited for agentic workflows. It competes with the largest frontier models while being available under an open research license.
Quick Start with Ollama
```shell
ollama run mistral-large:q4_K_M
```

| Spec | Value |
|---|---|
| Creator | Mistral AI |
| Parameters | 123B |
| Architecture | transformer-decoder |
| Context | 128K tokens |
| Released | Jul 24, 2024 |
| License | Mistral Research License |
| Ollama | mistral-large |
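Beyond the interactive CLI, Ollama exposes a local REST API (by default at `http://localhost:11434`). A minimal sketch of the request body for the `/api/generate` endpoint, using the model tag from the table above; the prompt is just an illustration:

```python
import json

# Build the JSON body for a POST to Ollama's /api/generate endpoint.
# "stream": False asks for a single complete response instead of
# newline-delimited streaming chunks.
body = json.dumps({
    "model": "mistral-large:q4_K_M",
    "prompt": "Write a Python function that checks if a number is prime.",
    "stream": False,
})
print(body)
```

The resulting string can be sent with any HTTP client, e.g. `curl http://localhost:11434/api/generate -d @body.json`, assuming the Ollama server is running locally.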
Quantization Options
| Format | File Size | VRAM Required | Ollama Tag |
|---|---|---|---|
| Q4_K_M (recommended) | 62 GB | 67 GB | q4_K_M |
| Q8_0 | 123 GB | 129 GB | q8_0 |
Compatible Hardware
The recommended Q4_K_M quantization requires approximately 67 GB of VRAM, which in practice means a multi-GPU setup or a single high-memory accelerator.
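These sizes follow a simple rule of thumb: file size is roughly parameter count × effective bits per weight ÷ 8, with VRAM adding several GB of overhead for the KV cache and activations. A minimal sketch; the bits-per-weight figures below are approximations (effective rates vary by GGUF quant mix), not values from this page:

```python
def estimate_size_gb(params_billions: float, bits_per_weight: float) -> float:
    """Rough model-file size in GB: parameters * bits / 8."""
    return params_billions * bits_per_weight / 8

# Assumed effective bits per weight (approximate, varies by quant recipe):
# Q4_K_M-class quants average roughly 4.5 bpw; Q8_0 stores ~8 bpw plus scales.
print(round(estimate_size_gb(123, 4.5), 1))
print(round(estimate_size_gb(123, 8.0), 1))
```

The estimates land in the same ballpark as the table above; the exact file sizes differ because K-quants mix several bit widths across tensor types.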
Benchmark Scores
| Benchmark | Score |
|---|---|
| MMLU | 84.0 |