Dhanishtha 2.0 Preview by HelpingAI



Dhanishtha 2.0 Preview Benchmarks

Dhanishtha 2.0 Preview (HelpingAI/Dhanishtha-2.0-preview)

Dhanishtha 2.0 Preview Parameters and Internals

LLM Name: Dhanishtha 2.0 Preview
Repository: https://huggingface.co/HelpingAI/Dhanishtha-2.0-preview
Base Model(s): Qwen3 14B Base (Qwen/Qwen3-14B-Base)
Model Size: 14B
Required VRAM: 29.5 GB
Updated: 2025-08-26
Maintainer: HelpingAI
Model Type: qwen3
Model Files: 5.0 GB (1-of-6), 5.0 GB (2-of-6), 4.9 GB (3-of-6), 5.0 GB (4-of-6), 4.9 GB (5-of-6), 4.7 GB (6-of-6)
Supported Languages: en, hi, zh, es, fr, de, ja, ko, ar, pt, ru, it, nl, tr, pl, sv, da, no, fi, he, th, vi, id, ms, tl, sw, yo, zu, am, bn, gu, kn, ml, mr, ne, or, pa, ta, te, ur
Model Architecture: Qwen3ForCausalLM
License: apache-2.0
Context Length: 40960
Model Max Length: 40960
Transformers Version: 4.52.4
Tokenizer Class: Qwen2Tokenizer
Padding Token: <|vision_pad|>
Vocabulary Size: 151936
Torch Data Type: bfloat16
Errors: replace
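As a quick sanity check on the figures above (a plain-Python sketch; the shard sizes and the 29.5 GB VRAM figure are taken from the table, the rest is arithmetic), the six sharded model files sum exactly to the listed VRAM requirement, and a 2-bytes-per-parameter bfloat16 estimate for a nominal 14B model lands close to it:

```python
# Sum of the six sharded model files listed above (GB each).
shard_sizes_gb = [5.0, 5.0, 4.9, 5.0, 4.9, 4.7]
total_gb = round(sum(shard_sizes_gb), 1)
print(total_gb)  # 29.5 -- matches the "Required VRAM" row

# bfloat16 stores 2 bytes per parameter, so a nominal 14B-parameter model
# needs roughly 28 GB of raw weights; the remainder is embeddings, buffers,
# and file-format overhead.
approx_weights_gb = 14e9 * 2 / 1e9
print(approx_weights_gb)  # 28.0
```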

Best Alternatives to Dhanishtha 2.0 Preview

| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| SimpleChat 14B V1 | 195K / 29.5 GB | 30 | 2 |
| ...0528DistillQwen 14B V27.3 200K | 195K / 29.5 GB | 15 | 4 |
| ...uct 21B Brainstorm20x 128K Ctx | 128K / 84.1 GB | 29 | 0 |
| Qwen3 14B | 40K / 29.7 GB | 956363 | 251 |
| Qwen3 14B FP8 | 40K / 16.4 GB | 68456 | 31 |
| Qwen3 14B | 40K / 29.5 GB | 2243 | 19 |
| QiMing Pantheon Qwen3 14B | 40K / 29.2 GB | 55 | 1 |
| MiroThinker 14B DPO V0.1 | 40K / 29.7 GB | 100 | 13 |
| Qwen3 14B DarkFusion | 40K / 29.5 GB | 33 | 0 |
| MiroThinker 14B SFT V0.1 | 40K / 29.7 GB | 43 | 9 |
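The RAM column can be read through simple bytes-per-parameter arithmetic (a sketch; the 29.7 GB and 16.4 GB figures come from the Qwen3 14B and Qwen3 14B FP8 rows above, the per-format byte counts are an assumption about the storage dtype):

```python
# bfloat16 uses ~2 bytes per parameter, FP8 ~1 byte per parameter, which is
# why the FP8 build of a 14B model needs roughly half the memory.
params_b = 14  # billions of parameters
bf16_gb = params_b * 2  # ~28 GB raw weights; the listed bf16 build is 29.7 GB
fp8_gb = params_b * 1   # ~14 GB raw weights; the listed FP8 build is 16.4 GB
print(bf16_gb, fp8_gb)  # 28 14
```

The gap between the estimate and the on-disk size is the usual embedding, buffer, and format overhead.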


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124