LLM Name | Opt 13B
---|---
Repository 🤗 | https://huggingface.co/ArthurZ/opt-13b
Model Size | 13b
Required VRAM | 26.3 GB
Updated | 2025-08-17
Maintainer | ArthurZ
Model Type | opt
Model Files | —
Model Architecture | OPTForCausalLM
Context Length | 2048
Model Max Length | 2048
Transformers Version | 4.21.0.dev0
Vocabulary Size | 50272
Torch Data Type | float16
Activation Function | relu
Best Alternatives | Context / RAM | Downloads | Likes
---|---|---|---
Emoopt 13B | 2K / 25.7 GB | 17 | 0 |
OPT 13B Nerybus Mix | 2K / 26.4 GB | 1963 | 36 |
OPT 13B Erebus | 2K / 25.7 GB | 6327 | 249 |
OPT 13B Nerys V2 | 2K / 25.7 GB | 4340 | 12 |
Opt 13B | 2K / 25.8 GB | 15830 | 66 |
Basic Facebook 13B | 2K / 25.8 GB | 15 | 1 |
OPT 13B Nerybus Mix 4bit 128g | 2K / 7.3 GB | 1786 | 6 |
OPT 13B Erebus 4bit 128g | 2K / 7.3 GB | 35 | 17 |
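The VRAM figures above follow roughly from parameter count × bytes per parameter: 13B parameters at float16 (2 bytes each) come to about 26 GB of weights, while the 4-bit quantized variants shrink to roughly a quarter of that. A minimal sketch of this back-of-the-envelope estimate (the function name is illustrative; real memory use is higher due to activations, KV cache, and framework overhead):

```python
def estimate_weight_vram_gb(num_params: float, bytes_per_param: float = 2.0) -> float:
    """Estimate VRAM needed for model weights alone, in decimal GB.

    Assumes only the weights are resident; activations, KV cache,
    and runtime overhead add on top of this figure.
    """
    return num_params * bytes_per_param / 1e9

# 13B parameters in float16 (2 bytes per parameter)
print(f"{estimate_weight_vram_gb(13e9):.1f} GB")        # → 26.0 GB

# Same model at 4-bit quantization (0.5 bytes per parameter)
print(f"{estimate_weight_vram_gb(13e9, 0.5):.1f} GB")   # → 6.5 GB
```

The 26.0 GB estimate matches the table's 26.3 GB requirement (the small gap is non-weight overhead), and the 4-bit estimate lines up with the ~7.3 GB listed for the 4bit 128g variants.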