| LLM Name | Fimbulvetr 11B Attention V0.1 Test |
|---|---|
| Repository 🤗 | https://huggingface.co/TheHierophant/Fimbulvetr-11B-Attention-V0.1-test |
| Base Model(s) | |
| Merged Model | Yes | 
| Model Size | 11B | 
| Required VRAM | 21.4 GB | 
| Updated | 2025-07-16 | 
| Maintainer | TheHierophant | 
| Model Type | llama | 
| Model Files | |
| Model Architecture | LlamaForCausalLM | 
| Context Length | 4096 | 
| Model Max Length | 4096 | 
| Transformers Version | 4.46.2 | 
| Tokenizer Class | LlamaTokenizer | 
| Vocabulary Size | 32000 | 
| Torch Data Type | bfloat16 | 
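
Given the configuration above (LlamaForCausalLM architecture, LlamaTokenizer, 4096-token context, bfloat16 weights, roughly 21.4 GB of VRAM required), a minimal loading sketch with the Hugging Face `transformers` library might look like the following. The prompt text and generation parameters are illustrative assumptions, not part of the model card.

```python
# Minimal sketch: loading TheHierophant/Fimbulvetr-11B-Attention-V0.1-test with transformers.
# Assumes ~21.4 GB of VRAM is available for the bfloat16 weights and that the
# accelerate package is installed for device_map="auto". The prompt and the
# generation settings below are illustrative, not taken from the model card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "TheHierophant/Fimbulvetr-11B-Attention-V0.1-test"

tokenizer = AutoTokenizer.from_pretrained(repo_id)  # LlamaTokenizer, 32000-token vocabulary
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # matches the card's Torch Data Type
    device_map="auto",           # place layers on the available GPU(s)
)

prompt = "Write a short scene set in a snowed-in mountain cabin."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# The context window is 4096 tokens, so keep prompt + generated tokens under that limit.
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```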
| Best Alternatives | Context / VRAM | Downloads | Likes | 
|---|---|---|---|
| A6.1 | 1024K / 16.1 GB | 130 | 0 | 
| ...ral 11B Omni OP 1K 2048 Ver0.1 | 32K / 21.4 GB | 5 | 0 | 
| MIstral 11B Omni OP U1k Ver0.1 | 32K / 21.4 GB | 5 | 0 | 
| Llama 3 Synatra 11B V1 20K | 20K / 23 GB | 6 | 9 | 
| Fimbulvetr 11B V2.1 16K | 16K / 21.4 GB | 9 | 17 | 
| Moistral 11B V2 | 8K / 21.4 GB | 59 | 21 | 
| Narumashi 11B V0.9 | 8K / 21.4 GB | 54 | 1 | 
| Moistral 11B V3 | 8K / 21.4 GB | 67 | 108 | 
| Moistral 11B V5d E4 | 8K / 21.4 GB | 15 | 1 | 
| Moistral 11B V5c | 8K / 21.4 GB | 11 | 1 | 