| Property | Value |
|---|---|
| LLM Name | Samantha 1.11 CodeLlama 34B GGUF |
| Repository 🤗 | https://huggingface.co/second-state/Samantha-1.11-CodeLlama-34B-GGUF |
| Model Name | Samantha 1.11 CodeLlama 34B |
| Model Creator | Eric Hartford |
| Base Model(s) | |
| Model Size | 34b |
| Required VRAM | 12.5 GB |
| Updated | 2025-09-23 |
| Maintainer | second-state |
| Model Type | llama |
| Model Files | |
| Supported Languages | en |
| GGUF Quantization | Yes |
| Quantization Type | gguf, q2, q4_k, q5_k |
| Generates Code | Yes |
| Model Architecture | LlamaForCausalLM |
| License | llama2 |
| Context Length | 2048 |
| Model Max Length | 2048 |
| Transformers Version | 4.32.0.dev0 |
| Vocabulary Size | 32000 |
| Torch Data Type | float16 |
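
Since the repository ships GGUF quantizations, one common way to run the model locally is through `llama-cpp-python` together with `huggingface_hub`. The sketch below is a minimal, unofficial example; the exact `.gguf` filename is an assumption, so check the repository's file listing and pick the quantization (q2, q4_k, or q5_k) that fits your hardware.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quantized file from the repository listed above.
# The filename below is assumed — verify it against the repo's file list.
model_path = hf_hub_download(
    repo_id="second-state/Samantha-1.11-CodeLlama-34B-GGUF",
    filename="samantha-1.11-codellama-34b.Q4_K_M.gguf",  # assumed filename
)

# Context length is 2048 tokens per the model card above.
llm = Llama(model_path=model_path, n_ctx=2048)

output = llm(
    "Write a Python function that reverses a string.",
    max_tokens=256,
)
print(output["choices"][0]["text"])
```

Lower-bit quantizations (q2) reduce memory use at the cost of output quality, while q5_k stays closer to the original float16 weights but needs more RAM/VRAM.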
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| CodeLlama 34B Instruct Fp16 | 16K / 67.5 GB | 2819 | 6 |
| CodeLlama 34B Python Fp16 | 16K / 67.5 GB | 1859 | 13 |
| Codellama Extraction | 16K / 67.6 GB | 5 | 0 |
| Phind Codellama 34B V2 EXL2 | 16K / GB | 6 | 16 |
| Codellama 34B Bnb 4bit | 16K / 18.2 GB | 1087 | 4 |
| CodeLlama 34B Instruct Hf 4bit | 16K / 19.4 GB | 38 | 2 |
| CodeLlama 34B Fp16 | 16K / 67.5 GB | 7 | 4 |
| XwinCoder 34B 4.0bpw H6 EXL2 | 16K / 17.4 GB | 6 | 1 |
| ...Codellama 34B V2 Megacode EXL2 | 16K / GB | 7 | 10 |
| ...gpt 32K Codellama 34B Instruct | 32K / 67.5 GB | 89 | 2 |