| Property | Value |
|---|---|
| LLM Name | Llama 3 8B Magpie Pro SFT 200K V0.1 |
| Repository 🤗 | https://huggingface.co/Magpie-Align/Llama-3-8B-Magpie-Pro-SFT-200K-v0.1 |
| Base Model(s) |  |
| Model Size | 8B |
| Required VRAM | 16.1 GB |
| Updated | 2025-10-13 |
| Maintainer | Magpie-Align |
| Model Type | llama |
| Model Files |  |
| Model Architecture | LlamaForCausalLM |
| License | llama3 |
| Context Length | 8192 |
| Model Max Length | 8192 |
| Transformers Version | 4.40.2 |
| Tokenizer Class | PreTrainedTokenizerFast |
| Padding Token | <\|end_of_text\|> |
| Vocabulary Size | 128256 |
| Torch Data Type | bfloat16 |
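The details above map directly onto a standard `transformers` loading call. Below is a minimal sketch, assuming the listed Transformers version (4.40.2) or later and enough GPU memory for the ~16.1 GB of bfloat16 weights; `device_map="auto"` additionally requires the `accelerate` package, and the example prompt is illustrative rather than the checkpoint's documented template.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Magpie-Align/Llama-3-8B-Magpie-Pro-SFT-200K-v0.1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the Torch Data Type listed above
    device_map="auto",           # requires `accelerate`; places weights on available GPUs
)

# Example prompt only; check the repository card for the chat/prompt
# template this SFT checkpoint was actually trained with.
inputs = tokenizer("Explain instruction tuning in one sentence.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```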
| Best Alternatives | Context / VRAM | Downloads | Likes |
|---|---|---|---|
| ...otron 8B UltraLong 4M Instruct | 4192K / 32.1 GB | 1531 | 120 | 
| ...a 3.1 8B UltraLong 4M Instruct | 4192K / 32.1 GB | 176 | 24 | 
| UltraLong Thinking | 4192K / 16.1 GB | 45 | 3 | 
| ...otron 8B UltraLong 2M Instruct | 2096K / 32.1 GB | 1142 | 15 | 
| ...a 3.1 8B UltraLong 2M Instruct | 2096K / 32.1 GB | 875 | 9 | 
| ...otron 8B UltraLong 1M Instruct | 1048K / 32.1 GB | 7142 | 52 | 
| ...a 3.1 8B UltraLong 1M Instruct | 1048K / 32.1 GB | 1387 | 29 | 
| ...xis Bookwriter Llama3.1 8B Sft | 1048K / 16.1 GB | 40 | 4 | 
| Zero Llama 3.1 8B Beta6 | 1048K / 16.1 GB | 2 | 1 | 
| ...dger Nu Llama 3.1 8B UltraLong | 1048K / 16.2 GB | 5 | 3 | 