| LLM Name | Luxia 21.4B Alignment V1.0 |
|---|---|
| Repository 🤗 | https://huggingface.co/saltlux/luxia-21.4b-alignment-v1.0 |
| Model Size | 21.4b |
| Required VRAM | 42.8 GB |
| Updated | 2025-09-23 |
| Maintainer | saltlux |
| Model Type | llama |
| Model Files |  |
| Supported Languages | en |
| Model Architecture | LlamaForCausalLM |
| License | apache-2.0 |
| Context Length | 32768 |
| Model Max Length | 32768 |
| Transformers Version | 4.37.2 |
| Is Biased | 0 |
| Tokenizer Class | LlamaTokenizer |
| Padding Token | </s> |
| Vocabulary Size | 92544 |
| Torch Data Type | bfloat16 |
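The details above (LlamaForCausalLM architecture, LlamaTokenizer, bfloat16 weights, 32K context) map onto a standard `transformers` load. Below is a minimal sketch, assuming a recent `transformers` install and enough GPU memory (or CPU offload via `device_map="auto"`) for roughly 42.8 GB of bfloat16 weights; the prompt string is a placeholder, not an official chat template for this model.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repository ID taken from the card above.
model_id = "saltlux/luxia-21.4b-alignment-v1.0"

# The card lists LlamaTokenizer as the tokenizer class; AutoTokenizer resolves it.
tokenizer = AutoTokenizer.from_pretrained(model_id)

# bfloat16 matches the Torch Data Type listed above; device_map="auto" spreads
# the ~42.8 GB of weights across available GPUs and offloads the rest to CPU.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Placeholder prompt; adjust to whatever template the upstream card recommends.
prompt = "What is the capital of France?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```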
| Model | Likes | Downloads | VRAM | 
|---|---|---|---|
| ...uxia 21.4B Alignment V1.0 GGUF | 9 | 154 | 8 GB | 
| Best Alternatives | Context / RAM | Downloads | Likes | 
|---|---|---|---|
| Luxia 21.4B Alignment V1.2 | 32K / 43 GB | 9654 | 8 | 
| UNA ThePitbull 21.4B V2 | 32K / 43 GB | 8 | 16 | 
| Final Model Test V2 | 32K / 43 GB | 5 | 0 | 
| DPO Model Test1 | 32K / 43 GB | 5 | 0 | 
| Sft Model Test1 | 32K / 43 GB | 7 | 0 | 
| UNA ThePitbull 21.4 V1 | 32K / 43 GB | 6 | 5 | 
| Merge Model Test2 | 32K / 42.8 GB | 8 | 0 | 
| Merge Model Test1 | 32K / 42.8 GB | 6 | 0 | 
| Sft Model Test0 | 32K / 43 GB | 7 | 0 | 
| Alignment Model Test9 | 32K / 43 GB | 5 | 0 | 