Fbopt 350M 8bit is an open-source language model by yec019. Key figures: 350M parameters, 0.4 GB required VRAM, 2K context length, unknown license, 8-bit quantized. Benchmark scores: HF Score 30.2, LLM Explorer Score 0.19, ARC 23.6, HellaSwag 36.6, MMLU 26.2, TruthfulQA 41, WinoGrande 52.6, GSM8K 1.3.
| Additional Notes | |
|---|---|
| LLM Name | Fbopt 350M 8bit |
| Repository 🤗 | https://huggingface.co/yec019/fbopt-350m-8bit |
| Model Size | 350m |
| Required VRAM | 0.4 GB |
| Updated | 2026-03-29 |
| Maintainer | yec019 |
| Model Type | opt |
| Model Files | |
| Quantization Type | 8bit |
| Model Architecture | OPTForCausalLM |
| License | unknown |
| Context Length | 2048 |
| Model Max Length | 2048 |
| Transformers Version | 4.36.0.dev0 |
| Tokenizer Class | GPT2Tokenizer |
| Padding Token | <pad> |
| Vocabulary Size | 50272 |
| Torch Data Type | float16 |
| Activation Function | relu |
| Errors | replace |
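The table above lists the repository, architecture (OPTForCausalLM), tokenizer class (GPT2Tokenizer), and 2048-token context length. A minimal loading sketch is shown below, assuming the checkpoint loads through the standard transformers auto classes; the exact arguments for this particular 8-bit upload (e.g. any bitsandbytes settings) may differ.

```python
# Minimal sketch: loading the model via the standard transformers API.
# Assumes the repository behaves like a regular OPTForCausalLM checkpoint;
# loading arguments for the 8-bit weights may need adjusting.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "yec019/fbopt-350m-8bit"

tokenizer = AutoTokenizer.from_pretrained(repo_id)      # GPT2Tokenizer per the model card
model = AutoModelForCausalLM.from_pretrained(repo_id)   # OPTForCausalLM architecture

inputs = tokenizer("Hello, my name is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)   # context length is 2048 tokens
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```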
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| Ov Opt 350M 8bit Kv Cache | 2K / 0.4 GB | 1 | 1 |
| Opt Mini Dataset 0 | 2K / 0.7 GB | 5 | 0 |
| Facebook Opt 350M SFT Korz14 | 2K / 0.7 GB | 5 | 0 |
| Opt 350M | 2K / 0.7 GB | 174097 | 149 |
| Temp Model Sft | 2K / 1.3 GB | 5 | 0 |
| Gpt350 Chat S V0 | 2K / 0.7 GB | 6 | 0 |
| Gpt350 Chat S V0 1 | 2K / 0.7 GB | 5 | 0 |
| Dadjokes Tuned Opt | 2K / 1.3 GB | 7 | 2 |
| Pygmalion 350M | 2K / 1.3 GB | 907 | 54 |
| Rockyalquimista888 | 2K / 1.3 GB | 5 | 0 |