| LLM Name | HelpingAI2 6B |
|---|---|
| Repository 🤗 | https://huggingface.co/OEvortex/HelpingAI2-6B |
| Model Size | 6b |
| Required VRAM | 11.7 GB |
| Updated | 2024-12-05 |
| Maintainer | OEvortex |
| Model Type | llama |
| Model Files |  |
| Supported Languages | en |
| GGUF Quantization | Yes |
| Quantization Type | q4, gguf, q4_k |
| Model Architecture | LlamaForCausalLM |
| License | other |
| Context Length | 8192 |
| Model Max Length | 8192 |
| Transformers Version | 4.43.3 |
| Tokenizer Class | PreTrainedTokenizerFast |
| Padding Token | <\|end_of_text\|> |
| Vocabulary Size | 128256 |
| Torch Data Type | float16 |
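The table above carries enough detail to load the checkpoint locally. Below is a minimal sketch using the `transformers` API, assuming the repository ID `OEvortex/HelpingAI2-6B` from the table and roughly 11.7 GB of free VRAM for the float16 weights; the prompt and generation settings are illustrative only.

```python
# Minimal sketch: load the float16 checkpoint listed above with transformers.
# Assumes transformers >= 4.43.3 and ~11.7 GB of free VRAM (per the table).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OEvortex/HelpingAI2-6B"  # repository from the table above

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,   # Torch Data Type from the table
    device_map="auto",           # place layers on the available GPU(s)
)

prompt = "Hello! What can you help me with today?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Since the card also lists GGUF quantization (q4_k), the model can alternatively run through `llama-cpp-python` on CPU or with partial GPU offload. The Model Files cell above is empty, so the filename pattern below is an assumption, as is the assumption that the GGUF files sit in the same repository; adjust both to the actual repository contents.

```python
# Hedged sketch: run a q4_k GGUF variant with llama-cpp-python.
# The filename glob is an assumption; check the repo for the real file name.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="OEvortex/HelpingAI2-6B",
    filename="*q4_k*.gguf",   # assumed pattern for the q4_k quant listed above
    n_ctx=8192,               # Context Length from the table
)
result = llm("Hello! What can you help me with today?", max_tokens=64)
print(result["choices"][0]["text"])
```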
| Model | Likes | Downloads | VRAM |
|---|---|---|---|
| NSFW 6B | 32 | 39460 | 11 GB |
| HELVETE | 39 | 1112 | 11 GB |
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| NSFW 6B | 8K / 11.7 GB | 39460 | 32 |
| HELVETE | 8K / 11.7 GB | 1112 | 39 |
| HelpingAI2 6B | 8K / 11.7 GB | 20 | 4 |
| Yi 1.5 6B Chat GGUF | 4K / 2.3 GB | 1235 | 4 |
| Yi 1.5 6B Chat IMat GGUF | 4K / 1.4 GB | 100 | 0 |
| Yi 1.5 6B Chat GGUF | 4K / 2.3 GB | 84 | 1 |
| Yi 6B Chat GGUF | 4K / 2.3 GB | 227 | 0 |
| ...i 6B 200K AEZAKMI V2 6bpw EXL2 | 195K / 4.9 GB | 4 | 3 |
| Yi 6B 200K 8.0bpw H8 EXL2 | 195K / 6.3 GB | 8 | 1 |
| Yi 6B 200K 6.0bpw H6 EXL2 | 195K / 4.9 GB | 4 | 1 |