Llama 3 Tiny Random Gptq W4 is an open-source language model by yujiepan. Features: 1M-parameter LLM, 8K context, GPTQ-quantized, LLM Explorer Score: 0.13.
| Field | Value |
|---|---|
| LLM Name | Llama 3 Tiny Random Gptq W4 |
| Repository 🤗 | https://huggingface.co/yujiepan/llama-3-tiny-random-gptq-w4 |
| Model Size | 1M |
| Required VRAM | 0 GB |
| Updated | 2026-04-03 |
| Maintainer | yujiepan |
| Model Type | llama |
| Model Files | |
| GPTQ Quantization | Yes |
| Quantization Type | gptq |
| Model Architecture | LlamaForCausalLM |
| Context Length | 8192 |
| Model Max Length | 8192 |
| Transformers Version | 4.38.2 |
| Tokenizer Class | PreTrainedTokenizerFast |
| Vocabulary Size | 128256 |
| Torch Data Type | float16 |
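Given the architecture (LlamaForCausalLM) and GPTQ quantization listed above, the checkpoint can be loaded with the standard Transformers API. The following is a minimal sketch, assuming `transformers` (around the 4.38 version listed), `optimum`, and `auto-gptq` are installed; the exact dependencies are not stated in the card, and since this is a tiny randomly initialized model, generations are not meaningful and the checkpoint is mainly useful for smoke-testing GPTQ loading and generation code.

```python
# Sketch: load the GPTQ-quantized tiny random Llama 3 checkpoint.
# Assumes `transformers`, `optimum`, and `auto-gptq` are available.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "yujiepan/llama-3-tiny-random-gptq-w4"

tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

# Weights are random, so the output is nonsense; this only verifies
# that the GPTQ checkpoint loads and generates end to end.
inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```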
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| LWM Text Chat 1M GPTQ | 1024K / 4.3 GB | 12 | 1 |
| ...ma 3.2 1B Instruct XMADai 4bit | 128K / 1.5 GB | 4 | 5 |
| ...inyllama 2 1B Miniguanaco GPTQ | 2K / 0.8 GB | 19 | 1 |
| Llama3.2 1B HAREM | 128K / 2.5 GB | 181 | 0 |
| CodeNexus | 128K / 2.5 GB | 65 | 0 |
| ...2 1B Instruct Unsloth Bnb 4bit | 128K / 1.1 GB | 101952 | 4 |
| Llama 3.2 1B Instruct 4bit | 128K / 0.7 GB | 122614 | 19 |
| Llama 3.2 1B Unsloth Bnb 4bit | 128K / 1.1 GB | 11108 | 2 |
| Llama 3.2 1B Instruct Bnb 4bit | 128K / 1 GB | 30686 | 22 |
| Llama 3.2 1B Bnb 4bit | 128K / 1 GB | 17417 | 17 |