| Additional Notes | |
|---|---|
| LLM Name | TinyDolphin 2.8 1.1B 4bit Mlx |
| Repository 🤗 | https://huggingface.co/mlx-community/TinyDolphin-2.8-1.1b-4bit-mlx |
| Model Size | 1.1B |
| Required VRAM | 0.8 GB |
| Updated | 2025-09-23 |
| Maintainer | mlx-community |
| Model Type | llama |
| Model Files | |
| Supported Languages | en |
| Quantization Type | 4bit |
| Model Architecture | LlamaForCausalLM |
| License | apache-2.0 |
| Context Length | 4096 |
| Model Max Length | 4096 |
| Transformers Version | 4.36.2 |
| Padding Token | </s> |
| Vocabulary Size | 32002 |
| Torch Data Type | bfloat16 |
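Because this is a 4-bit MLX quantization of a Llama-architecture model, it can be run locally on Apple silicon with the `mlx-lm` package. The snippet below is a minimal sketch, assuming the standard `mlx_lm` `load`/`generate` API and the repository ID listed above; the ChatML prompt format is an assumption (Dolphin fine-tunes typically use it) and may need adjusting.

```python
# Minimal sketch: run the 4-bit MLX quantization with mlx-lm on Apple silicon.
# Assumes `pip install mlx-lm`; check the API against the package version you install.
from mlx_lm import load, generate

# Download (or reuse a cached copy of) the repository listed in the table above.
model, tokenizer = load("mlx-community/TinyDolphin-2.8-1.1b-4bit-mlx")

# ChatML-style prompt is assumed here for this Dolphin fine-tune of TinyLlama.
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nExplain what 4-bit quantization does.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

# Keep prompt plus output within the 4096-token context listed above.
response = generate(model, tokenizer, prompt=prompt, max_tokens=256)
print(response)
```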
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| ...E TinyLlama 1.1B 32K Base 4bit | 32K / 0.6 GB | 30 | 1 |
| ...E TinyLlama 1.1B 32K Base 8bit | 32K / 1.2 GB | 10 | 1 |
| ...1B 32K Instruct 3.0bpw H6 EXL2 | 32K / 0.5 GB | 10 | 0 |
| ...1B 32K Instruct 8.0bpw H8 EXL2 | 32K / 1.2 GB | 0 | 1 |
| Athena TinyLlama V0.1 | 16K / 2.2 GB | 14 | 0 |
| ...lama PY CODER 4bit Lora 4k V12 | 4K / 2.2 GB | 5 | 0 |
| Tinyllama Coder Py V14 | 4K / 2.2 GB | 7 | 0 |
| ...icipal Prediction Merged Model | 4K / 2.2 GB | 6 | 0 |
| TiamaPY V27 | 4K / 2.2 GB | 5 | 0 |
| TiamaPY 1.1B V24 | 4K / 2.2 GB | 5 | 0 |