| Field | Value |
|---|---|
| LLM Name | Qwen1.5 110B |
| Repository 🤗 | https://huggingface.co/Qwen/Qwen1.5-110B |
| Model Size | 110b |
| Required VRAM | 221.7 GB |
| Updated | 2025-10-05 |
| Maintainer | Qwen |
| Model Type | qwen2 |
| Model Files |  |
| Supported Languages | en |
| Model Architecture | Qwen2ForCausalLM |
| License | other |
| Context Length | 32768 |
| Model Max Length | 32768 |
| Transformers Version | 4.37.0 |
| Tokenizer Class | Qwen2Tokenizer |
| Padding Token | <|endoftext|> |
| Vocabulary Size | 152064 |
| Torch Data Type | bfloat16 |
| Errors | replace |
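The table above maps directly onto a standard `transformers` loading call. As a rough sanity check on the VRAM figure: ~110B parameters at 2 bytes each (bfloat16) comes to roughly 220 GB, which matches the listed 221.7 GB. Below is a minimal sketch of loading the checkpoint, assuming `transformers` >= 4.37.0 (the version listed) and enough GPU memory to shard the weights; the prompt string is illustrative only.

```python
# Minimal loading sketch for Qwen/Qwen1.5-110B with Hugging Face transformers.
# Requires transformers >= 4.37.0 (per the table) and ~222 GB of GPU memory
# across the devices that device_map="auto" shards onto.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen1.5-110B"

tokenizer = AutoTokenizer.from_pretrained(model_id)  # resolves to Qwen2Tokenizer
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the Torch Data Type row
    device_map="auto",           # shard across available GPUs
)

inputs = tokenizer("Qwen1.5 110B is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```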
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| Qwen1.5 110B Chat | 32K / 158.3 GB | 3469 | 130 |
| Dolphin 2.9.1 Qwen 110B | 32K / 193.4 GB | 19 | 28 |
| Dolphin 2.9.1 Qwen 110B | 32K / 193.4 GB | 15 | 28 |
| Airoboros DPO 110B 3.3 | 32K / 158.3 GB | 9 | 0 |
| Airoboros 110B 3.3 | 32K / 158.3 GB | 7 | 2 |
| Aqua Qwen 0.1 110B | 32K / 198.3 GB | 7 | 0 |
| ...wen 1.5 110B Layer Mix Bpw 2.2 | 8K / 40.7 GB | 5 | 1 |
| Qwen1.5 110B Chat 4bit | 32K / 62.2 GB | 4 | 5 |
| Qwen1.5 110B Chat 8bit | 32K / 179.8 GB | 8 | 1 |
| ...n1.5 110B Chat 3.35bpw H6 EXL2 | 32K / 49 GB | 8 | 1 |
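Several alternatives above are pre-quantized variants with much smaller footprints (e.g. the 4-bit chat build at ~62 GB). As a point of comparison, weights can also be quantized on the fly at load time. The sketch below uses bitsandbytes NF4 quantization, which is a generic technique and not necessarily the format used by the listed repos (the last row's name suggests EXL2, for instance); the choice of the chat variant as `model_id` is an assumption for illustration.

```python
# Hedged sketch: on-the-fly 4-bit quantization with bitsandbytes, as an
# alternative to downloading a pre-quantized variant. The pre-quantized
# repos in the table may use different formats (e.g. EXL2) and tooling.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "Qwen/Qwen1.5-110B-Chat"  # the 32K-context chat variant listed above

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # NF4 storage format
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16, store in 4-bit
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
```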