| Attribute | Value |
|---|---|
| LLM Name | FluffyKaeloky Luminum V0.1 123B EXL2 2.7bpw H6 |
| Repository 🤗 | https://huggingface.co/BigHuggyD/FluffyKaeloky_Luminum-v0.1-123B_exl2_2.7bpw_h6 |
| Model Size | 123B |
| Required VRAM | 42.3 GB |
| Updated | 2025-09-23 |
| Maintainer | BigHuggyD |
| Model Type | mistral |
| Model Files | |
| Quantization Type | exl2 |
| Model Architecture | MistralForCausalLM |
| Context Length | 131072 |
| Model Max Length | 131072 |
| Transformers Version | 4.44.2 |
| Tokenizer Class | LlamaTokenizer |
| Vocabulary Size | 32768 |
| Torch Data Type | bfloat16 |
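The listed VRAM requirement can be roughly sanity-checked from the table's own numbers: an EXL2 quant stores about `parameters × bits-per-weight / 8` bytes of weights. A minimal sketch of that arithmetic (the 123B parameter count and 2.7 bpw come from the table; the small gap to the listed 42.3 GB is overhead such as embeddings and output head, an assumption on my part):

```python
# Rough weight-memory estimate for an EXL2-quantized model:
# bytes ≈ parameter_count * bits_per_weight / 8
params = 123e9   # 123B parameters (from the table above)
bpw = 2.7        # EXL2 bits per weight (from the model name / quant config)

weight_gb = params * bpw / 8 / 1e9  # decimal gigabytes
print(f"estimated weight memory: {weight_gb:.1f} GB")  # ~41.5 GB
```

This lands close to the listed 42.3 GB; note it excludes the KV cache, which at the full 131072-token context adds a substantial amount on top.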
| Best Alternatives | Context / RAM | Downloads | Likes | 
|---|---|---|---|
| Soar Mistral 123B | 128K / 217.2 GB | 14 | 1 | 
| Behemoth R1 123B V2 | 128K / 226.4 GB | 1159 | 20 | 
| Behemoth X 123B V2 | 128K / 217.2 GB | 763 | 16 | 
| Behemoth ReduX 123B V1 | 128K / 217.2 GB | 346 | 8 | 
| ML2 123B Magnum Diamond | 128K / 212.4 GB | 31 | 8 | 
| Gigaberg Mistral Large 123B | 128K / 222 GB | 5 | 2 | 
| Monstral 123B V2 | 128K / 245.4 GB | 86 | 33 | 
| Behemoth 123B V1.2 | 128K / 226.4 GB | 99 | 27 | 
| Behemoth 123B V2.2 | 128K / 226.4 GB | 7 | 26 | 
| Behemoth 123B V2.1 | 128K / 226.4 GB | 65 | 10 | 