| Property | Value |
|---|---|
| LLM Name | Rohit Sharma |
| Repository | https://huggingface.co/CharacterEcho/Rohit-Sharma |
| Model Size | 3B |
| Required VRAM | 10.6 GB |
| Updated | 2025-10-21 |
| Maintainer | CharacterEcho |
| Model Type | stablelm |
| Model Files | |
| GGUF Quantization | Yes |
| Quantization Type | gguf, q5, q5_k |
| Model Architecture | StableLmForCausalLM |
| License | apache-2.0 |
| Context Length | 4096 |
| Model Max Length | 4096 |
| Transformers Version | 4.42.3 |
| Tokenizer Class | GPTNeoXTokenizer |
| Padding Token | <\|im_end\|> |
| Vocabulary Size | 50280 |
| Torch Data Type | float16 |
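
Since the table lists a q5_k GGUF quantization, a minimal loading sketch with llama-cpp-python is shown below. The exact GGUF filename inside the repository is an assumption; check the Model Files listing for the real name before running it.

```python
# Minimal sketch: download the GGUF quantization from the repository above
# and run a short completion with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Hypothetical filename matching the q5_k quantization noted in the table;
# verify the actual file name in the repository's file listing.
gguf_path = hf_hub_download(
    repo_id="CharacterEcho/Rohit-Sharma",
    filename="rohit-sharma.Q5_K.gguf",  # assumption
)

llm = Llama(
    model_path=gguf_path,
    n_ctx=4096,  # matches the 4096 context length listed above
)

out = llm("Q: Who is Rohit Sharma?\nA:", max_tokens=64)
print(out["choices"][0]["text"])
```
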
| Best Alternatives | Context / RAM | Downloads | Likes | 
|---|---|---|---|
| Stable Code 3B | 16K / 5.6 GB | 7627 | 655 | 
| Stable Cypher Instruct 3B | 16K / 1.7 GB | 1158 | 28 | 
| Stable Code Instruct 3B | 16K / 5.6 GB | 1660 | 177 | 
| Stable Cypher Instruct 3B | 16K / 1.7 GB | 263 | 26 | 
| ...struct 3B Spider 1500 Steps Q4 | 16K / 1.7 GB | 8 | 0 | 
| NSFW 3B | 4K / 10.6 GB | 15664 | 194 | 
| HELVETE 3B | 4K / 10.6 GB | 14047 | 195 | 
| HelpingAI 3B Chat | 4K / 10.6 GB | 70 | 3 | 
| Stable Code Instruct 3B 4bit | 16K / 1.8 GB | 41 | 5 | 
| Stable Code 3B 4bit | 16K / 1.8 GB | 12 | 1 | 