Mellum 4B Base 4bit is an open-source language model published by mlx-community: a 4B-parameter LLM quantized to 4-bit, requiring 2.3 GB of VRAM, with an 8K context window, released under the Apache-2.0 license. LLM Explorer Score: 0.2.
| Property | Value |
|---|---|
| LLM Name | Mellum 4B Base 4bit |
| Repository 🤗 | https://huggingface.co/mlx-community/Mellum-4b-base-4bit |
| Base Model(s) | |
| Model Size | 4b |
| Required VRAM | 2.3 GB |
| Updated | 2026-04-06 |
| Maintainer | mlx-community |
| Model Type | llama |
| Model Files | |
| Quantization Type | 4bit |
| Model Architecture | LlamaForCausalLM |
| License | apache-2.0 |
| Context Length | 8192 |
| Model Max Length | 8192 |
| Transformers Version | 4.51.3 |
| Tokenizer Class | GPT2Tokenizer |
| Padding Token | <|endoftext|> |
| Vocabulary Size | 98304 |
| Torch Data Type | bfloat16 |
| Errors | replace |
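Since this is an MLX 4-bit quantization, the usual way to run it is with the `mlx-lm` package on Apple Silicon. A minimal sketch, assuming `mlx-lm` is installed (`pip install mlx-lm`) and the repository above is available on the Hugging Face Hub:

```python
# Sketch: load the 4-bit MLX quantization and generate a completion.
# Requires Apple Silicon and `pip install mlx-lm`; the model weights
# (~2.3 GB) are downloaded from the Hub on first use.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Mellum-4b-base-4bit")

# Mellum is a code-focused base model, so a code prefix is a natural prompt.
prompt = "def fibonacci(n):"
completion = generate(model, tokenizer, prompt=prompt, max_tokens=64)
print(completion)
```

Note that this is a base (non-instruct) model, so it continues text rather than following chat-style instructions; prompts should be written as prefixes to complete.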
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| ...Nemotron Nano 4B V1.1 Bnb 4bit | 128K / 3.5 GB | 1553 | 0 |
| ...a3.2 ColdBrew 4B Discovery F16 | 128K / 7.2 GB | 6 | 0 |
| 4Bcpt | 256K / 8.8 GB | 5 | 0 |
| HoldMy4BKTO | 256K / 8.8 GB | 5 | 0 |
| Xgen Small 4B Instruct R | 256K / 17.7 GB | 95 | 4 |
| Xgen Small 4B Base R | 256K / 17.7 GB | 17 | 2 |
| SJT 4B | 146K / 7.6 GB | 5 | 0 |
| ...lama 3.1 Nemotron Nano 4B V1.1 | 128K / 9 GB | 20991 | 113 |
| Impish LLAMA 4B | 128K / 9 GB | 1133 | 42 |
| Nemotron W 4b MagLight 0.1 | 128K / 9.2 GB | 13 | 3 |