Nous Hermes 2 34B
by Nous Research · nous-hermes family
34B parameters
Tags: text-generation, code-generation, reasoning, creative-writing, summarization
Nous Hermes 2 34B is the flagship model from Nous Research, fine-tuned for strong general-purpose performance across reasoning, code generation, and creative writing tasks. Built on a 34B parameter base, it delivers high-quality instruction following with excellent benchmark results. As one of the most popular community fine-tunes, Nous Hermes 2 34B strikes a balance between capability and accessibility. It is well-suited for users who need a powerful local model for complex reasoning and long-form content generation.
Quick Start with Ollama
```
ollama run nous-hermes2:34b-q4_K_M
```
Resources
Ollama
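Beyond the CLI, Ollama also serves a local REST API (default port 11434). A minimal sketch of calling its `/api/generate` endpoint from Python using only the standard library; the prompt text is just an illustrative placeholder:

```python
import json
import urllib.request

# Request payload for Ollama's /api/generate endpoint.
# "stream": False returns one JSON object instead of a token stream.
payload = {
    "model": "nous-hermes2:34b",
    "prompt": "Explain quicksort in two sentences.",
    "stream": False,
}

def generate(payload: dict, host: str = "http://localhost:11434") -> str:
    """POST the payload to a locally running Ollama server and
    return the model's completion text."""
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

With `ollama serve` running and the model pulled, `generate(payload)` returns the completion as a string.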
| Creator | Nous Research |
| Parameters | 34B |
| Architecture | transformer-decoder |
| Context | 4K tokens |
| Released | Jan 15, 2024 |
| License | Apache 2.0 |
| Ollama | nous-hermes2:34b |
Quantization Options
| Format | File Size | VRAM Required | Quality | Ollama Tag |
|---|---|---|---|---|
| Q4_K_M (recommended) | 17 GB | 19 GB | Balanced | 34b-q4_K_M |
| Q8_0 | 34 GB | 36 GB | Near-lossless | 34b-q8_0 |
| F16 | 68 GB | 70 GB | Full precision | 34b-fp16 |
Compatible Hardware
The recommended Q4_K_M quantization requires 19 GB of VRAM, so it fits on a single 24 GB GPU.
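The VRAM figures in the table above can be sanity-checked from the file sizes. A rough sketch, where the bits-per-weight values are back-derived from the listed file sizes (17/34/68 GB for 34B parameters) and the ~2 GB runtime/KV-cache overhead is an assumption, not an official figure:

```python
# Rough VRAM estimate: quantized weights + a fixed runtime overhead.
PARAMS = 34e9  # model parameter count

def est_vram_gb(bits_per_weight: float, overhead_gb: float = 2.0) -> float:
    # bits -> bytes -> GB for the weights, plus assumed KV-cache/runtime overhead
    weight_gb = PARAMS * bits_per_weight / 8 / 1e9
    return weight_gb + overhead_gb

for name, bpw in [("Q4_K_M", 4.0), ("Q8_0", 8.0), ("F16", 16.0)]:
    print(f"{name}: ~{est_vram_gb(bpw):.0f} GB VRAM")
```

This reproduces the table's 19 / 36 / 70 GB figures; longer contexts or larger batch sizes would push the overhead higher than the 2 GB assumed here.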
Benchmark Scores
| Benchmark | Score |
|---|---|
| MMLU | 75.0 |