LLM Name | Pretrained Llama3.2 3B NCKH |
Repository 🤗 | https://huggingface.co/baesad/Pretrained-Llama3.2-3B-NCKH
Model Size | 3B
Required VRAM | 7.2 GB |
Updated | 2025-07-14 |
Maintainer | baesad |
Model Type | llama |
Model Files | |
Quantization Type | 4bit |
Model Architecture | LlamaForCausalLM |
Context Length | 131072 |
Model Max Length | 131072 |
Transformers Version | 4.49.0 |
Tokenizer Class | PreTrainedTokenizer |
Padding Token | <|finetune_right_pad_id|> |
Vocabulary Size | 128256 |
Torch Data Type | bfloat16 |
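
The sketch below shows one way the settings listed above (4-bit quantization, bfloat16 compute dtype, the `<|finetune_right_pad_id|>` padding token) could map onto a standard `transformers` + `bitsandbytes` loading call. It is a minimal, hedged example: the repository ID is taken from the table, but the exact loading recipe the maintainer intended is not documented on this card.

```python
# Minimal sketch: loading baesad/Pretrained-Llama3.2-3B-NCKH in 4-bit.
# Assumes transformers>=4.49.0, bitsandbytes, and accelerate are installed;
# this is an illustrative recipe, not the maintainer's documented setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "baesad/Pretrained-Llama3.2-3B-NCKH"

# 4-bit quantization matching the card's "Quantization Type: 4bit"
# and "Torch Data Type: bfloat16" entries.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
# The card lists <|finetune_right_pad_id|> as the padding token.
tokenizer.pad_token = "<|finetune_right_pad_id|>"

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# Quick smoke test with a generic prompt.
inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
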
Best Alternatives | Context / RAM | Downloads | Likes |
---|---|---|---|
...ama Llama 3.2 3B Instruct FP16 | 128K / 6.5 GB | 1733434 | 0 |
...2 3B Instruct Unsloth Bnb 4bit | 128K / 2.4 GB | 98371 | 10 |
ReasoningCore 3B 0 | 128K / 6.5 GB | 246 | 2 |
Llama32 3B En Emo 2000 Stp | 128K / 2.2 GB | 13 | 0 |
Llama32 3B En Emo 300 Stp | 128K / 2.2 GB | 5 | 0 |
Llama32 3B En Emo 1000 Stp | 128K / 2.2 GB | 5 | 0 |
Llama32 3B En Emo 5000 Stp | 128K / 2.2 GB | 5 | 0 |
...eus 3B 0.1 Ft Unsloth Bnb 4bit | 128K / 2.5 GB | 7482 | 11 |
...ngCore 3B Instruct R01 Reflect | 128K / 6.5 GB | 0 | 1 |
....1 Llama 3.2 3B It Grpo 250404 | 128K / 6.5 GB | 841 | 60 |