Codestral Mamba 7B
by Mistral AI · mistral family
7B parameters · code-generation · text-generation
Codestral Mamba 7B is Mistral AI's code generation model built on the Mamba state-space architecture rather than the traditional transformer. The Mamba architecture enables linear-time inference and, in principle, unbounded context length, making it exceptionally fast for code completion and generation tasks. Unlike transformer-based models, whose attention cost grows quadratically with input length, Codestral Mamba maintains consistent per-token speed regardless of sequence length. This makes it well suited to large codebases and long files where low latency is critical for developer productivity.
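The linear-time property comes from the state-space recurrence at Mamba's core: each token updates a fixed-size hidden state, so per-token cost stays constant no matter how long the sequence already is. A toy scalar sketch (illustrative only, not Mamba's actual selective-scan kernel):

```python
# Illustrative sketch, NOT Mamba's real kernel: a scalar linear
# state-space recurrence h_t = a*h_{t-1} + b*x_t. Each token touches
# only the fixed-size state (O(1) per token, O(n) total), whereas
# attention revisits all prior tokens (O(n) per token, O(n^2) total).

def ssm_scan(xs, a=0.9, b=0.5):
    """Run the toy recurrence over xs and return all hidden states."""
    h, hs = 0.0, []
    for x in xs:            # one constant-cost update per token
        h = a * h + b * x
        hs.append(h)
    return hs
```

Doubling the input length doubles the work of `ssm_scan`, while a full-attention pass would quadruple it.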
Quick Start with Ollama
ollama run codestral-mamba:7b-q4_K_M
Resources
Ollama
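Beyond the CLI, Ollama also serves a local REST API (default port 11434). A minimal sketch of calling its `/api/generate` endpoint from the standard library, assuming the model above has been pulled and the Ollama server is running:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint
MODEL = "codestral-mamba:7b-q4_K_M"

def build_request(prompt: str, model: str = MODEL) -> urllib.request.Request:
    """Build a non-streaming generate request for the local Ollama server."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return urllib.request.Request(
        OLLAMA_URL,
        data=payload.encode(),
        headers={"Content-Type": "application/json"},
    )

def complete(prompt: str) -> str:
    """Send the prompt and return the model's completion text."""
    with urllib.request.urlopen(build_request(prompt)) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    # Requires a running Ollama server with the model pulled.
    print(complete("Write a Python function that reverses a string."))
```

Set `"stream": True` instead to receive the completion incrementally as newline-delimited JSON chunks.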
| Spec | Value |
|---|---|
| Creator | Mistral AI |
| Parameters | 7B |
| Architecture | mamba |
| Context | 256K tokens |
| Released | Jul 16, 2024 |
| License | Apache 2.0 |
| Ollama | codestral-mamba:7b |
Quantization Options
| Format | File Size | VRAM Required | Ollama Tag |
|---|---|---|---|
| Q4_K_M (recommended) | 4.4 GB | 6.9 GB | 7b-q4_K_M |
| Q8_0 | 7.4 GB | 9.9 GB | 7b-q8_0 |
| F16 | 14 GB | 17 GB | 7b-fp16 |
Compatible Hardware
The recommended Q4_K_M quantization requires 6.9 GB of VRAM, so it fits comfortably on a GPU with 8 GB of memory; Q8_0 and F16 need larger cards.
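The file sizes in the quantization table follow roughly from parameter count times average bits per weight. A back-of-envelope sketch (the bits-per-weight figures are approximate; real GGUF files mix tensor precisions, so actual sizes run somewhat higher):

```python
# Rough estimate only: quantized file size from parameter count and an
# assumed average bits-per-weight. K-quants like Q4_K_M average closer
# to ~4.85 bits/weight than a flat 4, since some tensors stay higher
# precision.

def approx_size_gb(params_billions: float = 7.0,
                   bits_per_weight: float = 4.85) -> float:
    """Approximate model file size in GB (decimal gigabytes)."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

# 7B at ~4.85 bits/weight -> ~4.2 GB, close to the 4.4 GB Q4_K_M file.
```

VRAM requirements then add runtime overhead (activations and, for Mamba, a fixed-size recurrent state) on top of the weights, which is why the table's VRAM figures exceed the file sizes.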