| LLM Name | Opt 13B |
|---|---|
| Repository 🤗 | https://huggingface.co/ArthurZ/opt-13b |
| Model Size | 13b |
| Required VRAM | 26.3 GB |
| Updated | 2025-09-23 |
| Maintainer | ArthurZ |
| Model Type | opt |
| Model Files | |
| Model Architecture | OPTForCausalLM |
| Context Length | 2048 |
| Model Max Length | 2048 |
| Transformers Version | 4.21.0.dev0 |
| Vocabulary Size | 50272 |
| Torch Data Type | float16 |
| Activation Function | relu |
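The metadata above is internally consistent: 13 billion parameters stored as float16 occupy two bytes each, which lines up with the listed ~26 GB VRAM requirement for the weights alone. A minimal sketch of that arithmetic (the round 13e9 parameter count is an assumption; activations and KV cache add further memory on top):

```python
def estimate_vram_gb(num_params: float, bytes_per_param: int = 2) -> float:
    """Approximate weight memory in decimal GB.

    bytes_per_param defaults to 2 for torch.float16 storage.
    """
    return num_params * bytes_per_param / 1e9

# Assumed round parameter count of 13 billion for OPT-13B.
print(estimate_vram_gb(13e9))  # ~26.0 GB for the fp16 weights
```

The small gap to the listed 26.3 GB comes from the exact parameter count and framework overhead.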
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| Emoopt 13B | 2K / 25.7 GB | 239 | 0 |
| OPT 13B Nerybus Mix | 2K / 26.4 GB | 1859 | 36 |
| OPT 13B Erebus | 2K / 25.7 GB | 2440 | 253 |
| OPT 13B Nerys V2 | 2K / 25.7 GB | 1801 | 12 |
| Opt 13B | 2K / 25.8 GB | 15254 | 66 |
| Basic Facebook 13B | 2K / 25.8 GB | 15 | 1 |
| OPT 13B Nerybus Mix 4bit 128g | 2K / 7.3 GB | 1820 | 6 |
| OPT 13B Erebus 4bit 128g | 2K / 7.3 GB | 9 | 17 |
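The 4-bit, group-size-128 variants in the table weigh in around 7.3 GB rather than ~26 GB. A rough lower-bound estimate, assuming half a byte per weight plus one fp16 scale per 128-weight group (zero-points, unquantized embeddings, and file overhead account for the remaining difference; the 13e9 parameter count is again an assumed round figure):

```python
def estimate_4bit_gb(num_params: float, group_size: int = 128) -> float:
    """Lower-bound size in decimal GB for 4-bit group-quantized weights.

    Each weight takes 0.5 bytes; each group of `group_size` weights
    carries one fp16 scale (2 bytes).
    """
    bytes_per_param = 0.5 + 2 / group_size
    return num_params * bytes_per_param / 1e9

print(estimate_4bit_gb(13e9))  # ~6.7 GB, below the listed 7.3 GB
```

Smaller group sizes tighten quantization accuracy but raise the per-group metadata cost, which this estimate makes explicit.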