Dolphin Mixtral 8x7B
by Cognitive Computations · dolphin family
47B parameters
text-generation · code-generation · reasoning · multilingual · creative-writing
Dolphin Mixtral 8x7B is an uncensored, instruction-tuned mixture-of-experts model built on Mistral's Mixtral architecture. With 47 billion total parameters and roughly 12 billion active per token, it delivers strong performance across text generation, coding, and multilingual tasks without alignment restrictions. The model pairs Mixtral's MoE architecture with Dolphin's uncensored training recipe, making it a popular choice for researchers and developers who want high-quality outputs free of content filtering. It requires significant VRAM, but rewards that investment with strong reasoning and creative-writing capability.
Quick Start with Ollama
ollama run dolphin-mixtral:8x7b-q4_K_M
Resources
Ollama
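Beyond the one-line CLI invocation above, the model can also be queried through Ollama's local REST API. A minimal sketch, assuming a default Ollama install serving on localhost:11434 and the q4_K_M tag already pulled:

```sh
# Send a single non-streaming generation request to the local Ollama server.
curl http://localhost:11434/api/generate -d '{
  "model": "dolphin-mixtral:8x7b-q4_K_M",
  "prompt": "Explain mixture-of-experts routing in two sentences.",
  "stream": false
}'
```

With `"stream": false` the reply arrives as a single JSON object whose `response` field holds the generated text.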
| Spec | Value |
|---|---|
| Creator | Cognitive Computations |
| Parameters | 47B |
| Architecture | mixture-of-experts |
| Context | 32K tokens |
| Released | Jan 20, 2024 |
| License | Apache 2.0 |
| Ollama | dolphin-mixtral:8x7b |
Quantization Options
| Format | File Size | VRAM Required | Ollama Tag |
|---|---|---|---|
| Q4_K_M (recommended) | 24 GB | 26 GB | 8x7b-q4_K_M |
| Q8_0 | 47 GB | 49 GB | 8x7b-q8_0 |
| F16 | 90 GB | 94 GB | 8x7b-fp16 |
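Each quantization in the table has its own tag and can be pulled individually; a short sketch of the relevant commands (download sizes are taken from the table above):

```sh
# Pull the recommended 4-bit quantization (~24 GB download, ~26 GB VRAM to run).
ollama pull dolphin-mixtral:8x7b-q4_K_M

# Or pull the near-lossless 8-bit build if VRAM allows (~47 GB download).
ollama pull dolphin-mixtral:8x7b-q8_0

# Confirm which builds are present locally and how much disk space they occupy.
ollama list
```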
Compatible Hardware
Q4_K_M requires 26 GB VRAM
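Before choosing a quantization, it is worth confirming how much GPU memory is actually available. One way to check on an NVIDIA system (this assumes the nvidia-smi utility that ships with the NVIDIA driver):

```sh
# Report each GPU's name and total memory; compare against the VRAM figures above.
nvidia-smi --query-gpu=name,memory.total --format=csv
```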
Benchmark Scores
MMLU: 70.0