| Property | Value |
|---|---|
| LLM Name | Facebook Opt 125M HQQ 1bit Smashed |
| Repository 🤗 | https://huggingface.co/PrunaAI/facebook-opt-125m-HQQ-1bit-smashed |
| Model Size | 125M |
| Required VRAM | 0.1 GB |
| Updated | 2025-09-23 |
| Maintainer | PrunaAI |
| Model Type | opt |
| Quantization Type | 1bit |
| Model Architecture | OPTForCausalLM |
| Context Length | 2048 |
| Model Max Length | 2048 |
| Transformers Version | 4.37.1 |
| Vocabulary Size | 50272 |
| Torch Data Type | float16 |
| Activation Function | relu |
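
The table above lists everything needed for a quick smoke test: the repository ID, the float16 torch dtype, and the 2048-token context window. Below is a minimal loading sketch that assumes the checkpoint can be loaded through the standard transformers interface with `trust_remote_code`; PrunaAI's HQQ-smashed repos may instead require the `hqq` package's own loader, and the prompt string is purely illustrative.

```python
# Minimal sketch: loading the 1-bit HQQ-smashed OPT-125M checkpoint.
# Assumption: the repo loads via the standard transformers API with
# trust_remote_code=True; some HQQ-quantized repos need the hqq
# library's dedicated loader instead.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "PrunaAI/facebook-opt-125m-HQQ-1bit-smashed"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,  # matches the card's Torch Data Type
    trust_remote_code=True,
)

# OPT's context window is 2048 tokens (Context Length above), so keep
# prompt + generated tokens under that limit.
inputs = tokenizer("The quick brown fox", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```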
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| Opt 125M Bnb 4bit | 2K / 0.1 GB | 6731 | 2 |
| ...book Opt 125M HQQ 2bit Smashed | 2K / 0.1 GB | 5 | 1 |
| ...125M Qcqa Ub 6 Best For Q Loss | 2K / 0.5 GB | 1718 | 0 |
| ...5M Qcqa Ub 6 Best For KV Cache | 2K / 0.5 GB | 1556 | 0 |
| ...25M Gqa Ub 6 Best For KV Cache | 2K / 0.5 GB | 716 | 0 |
| Opt 125M | 2K / 0.3 GB | 3959825 | 222 |
| Opt 125M Gptq | 2K / 0.1 GB | 3637 | 0 |
| Opt 125M Gptq | 2K / 0.1 GB | 5 | 0 |
| Quantized Opt 125M | 2K / 0.2 GB | 6 | 0 |
| Galactica 125M Cot | 2K / 0.5 GB | 5 | 0 |
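
The RAM column mostly tracks the weight bit-width. A back-of-the-envelope estimate makes the pattern clear; this is a sketch only, since real checkpoints also store embeddings (often kept at higher precision) and per-group quantization scales and zero-points, which is why the 1-bit, 2-bit, and 4-bit variants all land near 0.1 GB.

```python
# Rough weight-memory estimate for a 125M-parameter model at various
# quantization bit-widths. Actual file sizes run higher because of
# higher-precision embeddings/norms and quantizer metadata.
PARAMS = 125_000_000

for bits in (16, 8, 4, 2, 1):
    gib = PARAMS * bits / 8 / 1024**3
    print(f"{bits:>2}-bit weights: ~{gib:.3f} GiB")
```

At 16 bits this gives roughly 0.23 GiB, matching the 0.3 GB listed for the unquantized Opt 125M; at 1 bit the raw weights would be about 0.015 GiB, so the listed 0.1 GB is dominated by the non-quantized parts and metadata.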