| LLM Name | SFT TL;DR Pythia 1.4B |
| Repository 🤗 | https://huggingface.co/GitBag/sft_tldr_pythia_1_4b |
| Model Size | 1.4b |
| Required VRAM | 5.7 GB |
| Updated | 2025-09-23 |
| Maintainer | GitBag |
| Model Type | gpt_neox |
| Model Files | |
| Model Architecture | GPTNeoXForCausalLM |
| Context Length | 2048 |
| Model Max Length | 2048 |
| Transformers Version | 4.39.3 |
| Tokenizer Class | GPTNeoXTokenizer |
| Padding Token | [PAD] |
| Vocabulary Size | 50304 |
| Torch Data Type | float32 |
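
A minimal loading sketch based on the metadata above, assuming the standard `transformers` Auto classes resolve this `gpt_neox` checkpoint. The repo id, dtype, and 2048-token context length come from the table; the prompt is purely illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "GitBag/sft_tldr_pythia_1_4b"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float32,  # matches the "Torch Data Type" entry above
)

# Illustrative prompt only; truncate to the listed 2048-token context window.
prompt = "Summarize the following post.\n\nPOST: ...\n\nTL;DR:"
inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=2048)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```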
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| Bilingual GPT Neox 4B | 2K / 7.7 GB | 4035 | 27 |
| ...al GPT Neox 4B Instruction Ppo | 2K / 7.7 GB | 1466 | 14 |
| ...al GPT Neox 4B Instruction Sft | 2K / 7.6 GB | 1436 | 17 |
| Bilingual GPT Neox 4B 8K | 2K / 7.7 GB | 164 | 22 |
| StellarX 4B V0.2 | 2K / 16 GB | 1917 | 2 |
| StellarX 4B V0 | 2K / 8.1 GB | 1920 | 1 |
| Tora 4B | 2K / 7.6 GB | 5 | 2 |
| ...x 4B Instruction Sft En Ja 84K | 2K / 7.6 GB | 6 | 1 |
| StellarX 4B V0.2 GPTQ | 2K / 1.8 GB | 15 | 1 |
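
As a rough cross-check on the RAM figures above, weight memory is approximately parameter count times bytes per weight. A quick sketch of that estimate (ignoring activations and KV cache):

```python
def weight_gb(params: float, bytes_per_param: int) -> float:
    """Approximate memory for model weights alone, in decimal GB."""
    return params * bytes_per_param / 1e9

# 1.4B parameters in float32 (4 bytes) -> ~5.6 GB,
# consistent with the 5.7 GB "Required VRAM" listed above.
print(weight_gb(1.4e9, 4))  # 5.6

# The same weights in float16 (2 bytes) would need roughly half that.
print(weight_gb(1.4e9, 2))  # 2.8
```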