| LLM Name | GPT J 6B PNY GGML |
|---|---|
| Repository 🤗 | https://huggingface.co/tekkithorse/GPT-J-6B-PNY-GGML |
| Model Size | 6b |
| Required VRAM | 4.5 GB |
| Updated | 2025-09-23 |
| Maintainer | tekkithorse |
| Model Type | gptj |
| GGML Quantization | Yes |
| Quantization Type | ggml\|q4 |
| Model Architecture | GPTJForCausalLM |
| Model Max Length | 2048 |
| Transformers Version | 4.10.0.dev0 |
| Tokenizer Class | GPT2Tokenizer |
| Vocabulary Size | 50400 |
| Torch Data Type | float16 |
| Activation Function | gelu_new |
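
Because the repository ships GGML q4 weights for a GPT-J architecture rather than standard `transformers` checkpoints, a GGML-capable runner is the usual way to load it. The sketch below uses the `ctransformers` library as one such option (not confirmed by the model card); the `model_file` name is hypothetical and should be replaced with the actual GGML file listed in the repository, and `context_length` mirrors the Model Max Length field above.

```python
# Minimal sketch, assuming ctransformers is installed (`pip install ctransformers`)
# and the repository contains a single GGML binary.
from ctransformers import AutoModelForCausalLM

llm = AutoModelForCausalLM.from_pretrained(
    "tekkithorse/GPT-J-6B-PNY-GGML",            # repository from the table above
    model_type="gptj",                          # matches the Model Type field
    model_file="gpt-j-6b-pny-ggml-q4_0.bin",    # hypothetical file name; check the repo
    context_length=2048,                        # Model Max Length from the table
)

# Generate a short completion from a prompt.
print(llm("Once upon a time", max_new_tokens=64, temperature=0.8))
```
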
| Best Alternatives | Context / RAM | Downloads | Likes | 
|---|---|---|---|
| Bertin GPT J 6B Alpaca | 0K / 24.4 GB | 352 | 7 | 
| Zenos GPT J 6B Instruct 4bit | 0K / 3.4 GB | 7 | 1 | 
| Model | 0K / 6.2 GB | 6 | 0 | 
| ...oduct NER GPT J 6B 4bit Merged | 0K / 2.5 GB | 12 | 0 | 
| ...nese Novel GPT J 6B F16 Marisa | 0K / 12.2 GB | 8 | 3 | 
| Kakaobrain Kogpt 6B 8bit | 0K / 6.7 GB | 11 | 2 | 
| Pygmalion 6b Dev 4bit 128g | 0K / 4 GB | 25 | 120 | 
| GPT J 6B Skein 4bit 128g | 0K / 4 GB | 17 | 1 | 
| GPT J 6B Alpaca Gpt4 | 0K / 24.3 GB | 10 | 20 | 
| Pygmalion 6B 4bit 128g | 0K / 4 GB | 7 | 3 | 