| LLM Name | Etheria 55B V0.1 GPTQ |
|---|---|
| Repository 🤗 | https://huggingface.co/TheBloke/Etheria-55b-v0.1-GPTQ |
| Model Name | Etheria 55B v0.1 | 
| Model Creator | Steel | 
| Base Model(s) | |
| Model Size | 55b | 
| Required VRAM | 29.2 GB | 
| Updated | 2025-09-23 | 
| Maintainer | TheBloke | 
| Model Type | llama | 
| Model Files | |
| GPTQ Quantization | Yes | 
| Quantization Type | gptq | 
| Model Architecture | LlamaForCausalLM | 
| Context Length | 200000 | 
| Model Max Length | 200000 | 
| Transformers Version | 4.37.0.dev0 | 
| Tokenizer Class | LlamaTokenizer | 
| Padding Token | <unk> | 
| Vocabulary Size | 64002 | 
| Torch Data Type | bfloat16 | 
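
A minimal loading sketch based on the fields above (repository ID, GPTQ quantization, LlamaTokenizer, ~29.2 GB VRAM). It assumes `transformers` (4.37+ as listed) plus a GPTQ backend such as `auto-gptq`/`optimum` are installed; the prompt and generation settings are illustrative, not part of this card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/Etheria-55b-v0.1-GPTQ"

# AutoTokenizer resolves to the LlamaTokenizer class listed in this card.
tokenizer = AutoTokenizer.from_pretrained(model_id)

# The GPTQ quantization config stored in the repo is picked up automatically;
# device_map="auto" spreads the ~29 GB of quantized weights across available GPUs.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Explain GPTQ quantization in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
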
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| SG Raccoon Yi 55B GPTQ | 4K / 29.2 GB | 7 | 1 | 
| ...theria 55B V0.1 3.0bpw H6 EXL2 | 195K / 21.9 GB | 3 | 2 | 
| ...theria 55B V0.1 3.5bpw H6 EXL2 | 195K / 25.3 GB | 0 | 1 | 
| Etheria 55B V0.1 | 195K / 111.2 GB | 61 | 9 | 
| Etheria 55B V0.1 | 195K / 111.2 GB | 7 | 10 | 
| Etheria 55B V0.1 AWQ | 195K / 30.2 GB | 10 | 1 | 
| SG Raccoon Yi 55B 200K 2.0 | 195K / 111.3 GB | 5 | 6 | 
| SG Raccoon Yi 55B 200K | 195K / 111.4 GB | 4 | 4 | 
| SG Raccoon Yi 55B AWQ | 4K / 30.2 GB | 6 | 2 | 
| SG Raccoon Yi 55B | 4K / 111.2 GB | 3 | 6 | 