| Property | Value |
|---|---|
| LLM Name | Dhanishtha |
| Repository 🤗 | https://huggingface.co/AI4free/Dhanishtha |
| Model Size | 1.8B |
| Required VRAM | 3.5 GB |
| Updated | 2025-02-21 |
| Maintainer | AI4free |
| Model Type | qwen2 |
| Model Files | |
| Model Architecture | Qwen2ForCausalLM |
| Context Length | 131072 |
| Model Max Length | 131072 |
| Transformers Version | 4.48.3 |
| Tokenizer Class | LlamaTokenizer |
| Padding Token | <\|end▁of▁sentence\|> |
| Vocabulary Size | 151936 |
| Torch Data Type | float16 |
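The specs above map directly onto a standard `transformers` loading call. Below is a minimal sketch (not an official snippet from the repository) that loads the model in float16 to match the listed Torch Data Type and fit within the ~3.5 GB VRAM figure; the prompt string is a hypothetical placeholder.

```python
# Minimal sketch: loading AI4free/Dhanishtha with transformers (>= 4.48.3).
# Assumes a GPU with ~3.5 GB of free VRAM for the float16 weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AI4free/Dhanishtha"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the listed Torch Data Type
    device_map="auto",          # place weights on GPU if available
)

prompt = "Hello, who are you?"  # hypothetical prompt for illustration
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```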
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| LaConfiance PRYMMAL ECE TW3 | 128K / 7.1 GB | 5 | 0 |
| Qwen Fine Tuned V0 | 32K / 3.7 GB | 71 | 0 |
| Qwen1.5 1.8B Chat | 32K / 3.7 GB | 53962 | 56 |
| Qwen1.5 1.8B | 32K / 3.7 GB | 44285 | 51 |
| MiniPLM Qwen 200M | 32K / 0.8 GB | 1188 | 6 |
| Qwen1.5 1.8B Seed Sft | 32K / 3.7 GB | 11 | 0 |
| Sailor 1.8B Chat | 32K / 3.7 GB | 860 | 5 |
| Neural Chat Mini V2.2 1.8B | 32K / 3.7 GB | 14 | 6 |
| Qwen1.5 Wukong 1.8B | 32K / 3.7 GB | 6 | 4 |
| Orca 2.0 Tau 1.8B | 32K / 3.7 GB | 14 | 9 |