LLM Name | SmolLM2 360M Merged |
---|---|
Repository 🤗 | https://huggingface.co/vonjack/SmolLM2-360M-Merged |
Base Model(s) | |
Merged Model | Yes |
Model Size | 360M |
Required VRAM | 0.7 GB |
Updated | 2025-09-10 |
Maintainer | vonjack |
Model Type | llama |
Instruction-Based | Yes |
Model Files | |
GGUF Quantization | Yes |
Quantization Type | q8, gguf |
Model Architecture | LlamaForCausalLM |
Context Length | 8192 |
Model Max Length | 8192 |
Transformers Version | 4.46.0 |
Tokenizer Class | GPT2Tokenizer |
Vocabulary Size | 49152 |
Torch Data Type | bfloat16 |
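
Based on the metadata above (a LlamaForCausalLM checkpoint in bfloat16 with an 8192-token context), the following is a minimal loading sketch using the standard `transformers` auto classes. Only the repository id is taken from this listing; the prompt text and the availability of a chat template are assumptions for illustration. A GGUF q8 variant is also listed, which would instead be consumed by llama.cpp-compatible runtimes.

```python
# Minimal sketch: load the listed repo with the standard transformers auto classes.
# Assumes the checkpoint resolves via AutoModelForCausalLM (it is a LlamaForCausalLM)
# and that the tokenizer ships a chat template, since the model is instruction-based.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "vonjack/SmolLM2-360M-Merged"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # matches the listed torch data type
)

# Example instruction-style prompt (hypothetical content, for illustration only).
messages = [{"role": "user", "content": "Explain what a merged model is in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```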
Best Alternatives | Context / RAM | Downloads | Likes
---|---|---|---
SmolLM2 360M Instruct Bnb 4bit | 8K / 0.3 GB | 1319 | 1 |
SmolLM 360M Instruct 8bit | 2K / 0.4 GB | 17 | 2 |
SmolLM2 360M Instruct | 8K / 0.7 GB | 78262 | 137 |
SolaraV2 Coder 0511 | 8K / 0.7 GB | 1096 | 0 |
ProseFlow V1 360M Instruct | 8K / 0.7 GB | 7 | 0 |
SmolLM2 Rethink 360M | 8K / 1.4 GB | 14 | 1 |
BrainrotLM Assistant 362M | 8K / 0 GB | 12 | 0 |
...n Combined Instruction Dataset | 8K / 1.4 GB | 7 | 1 |
... Cpt Fineweb Norwegian Nynorsk | 8K / 1.4 GB | 6 | 0 |
SmolLM2 CoT 360M | 8K / 1.4 GB | 12 | 9 |