| LLM Name | Qwen3 0p6 Bf16 600 |
|---|---|
| Repository 🤗 | https://huggingface.co/moyixiao/qwen3_0p6_bf16_600 |
| Model Size | 596M parameters |
| Required VRAM | 1.2 GB |
| Updated | 2025-07-19 |
| Maintainer | moyixiao |
| Model Type | qwen3 |
| Model Architecture | Qwen3ForCausalLM |
| Context Length | 32768 |
| Model Max Length | 32768 |
| Transformers Version | 4.52.4 |
| Tokenizer Class | Qwen2Tokenizer |
| Padding Token | <\|endoftext\|> |
| Vocabulary Size | 151936 |
| Torch Data Type | bfloat16 |
| Tokenizer Decode Errors | replace |
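A minimal loading sketch using the standard Hugging Face Transformers API. The repo id, dtype, and tokenizer class come from the table above; the prompt and generation settings are illustrative assumptions, not part of the model card:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "moyixiao/qwen3_0p6_bf16_600"

# Resolves to Qwen2Tokenizer per the card's "Tokenizer Class" field.
tokenizer = AutoTokenizer.from_pretrained(repo_id)

# bfloat16 matches the card's "Torch Data Type"; at ~1.2 GB of weights
# this fits comfortably on most consumer GPUs.
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,
)

# Illustrative prompt and generation settings.
inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```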
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| Sam Reason A1 | 40K / 2.4 GB | 48 | 1 |
| SFT Nochat FULL DATA | 32K / 1.2 GB | 711 | 0 |
| Sft Scp Epoch1 | 32K / 1.2 GB | 793 | 0 |
| MNLP M3 Mcqa Model | 32K / 1.2 GB | 44 | 0 |
| ...DPO Model Smoltalk Bigger Test | 32K / 1.2 GB | 45 | 0 |
| MNLP M3 Mcqa Model | 32K / 1.2 GB | 33 | 0 |
| MNLP M3 DPO Model | 32K / 1.2 GB | 27 | 0 |
| Full Dataset Instruction V4 | 32K / 1.2 GB | 24 | 0 |
| MNLP M3 DPO Model Smoltalk All | 32K / 1.2 GB | 20 | 0 |
| Qwen Sft Smoltalk | 32K / 1.2 GB | 26 | 0 |