| LLM Name | Llama3 Finetune 4bit |
| Repository 🤗 | https://huggingface.co/Sreevadan/Llama3-finetune-4bit |
| Model Size | 4.6B |
| Required VRAM | 6.1 GB |
| Updated | 2024-07-04 |
| Maintainer | Sreevadan |
| Model Type | llama |
| Model Files | |
| Quantization Type | 4bit |
| Model Architecture | LlamaForCausalLM |
| License | apache-2.0 |
| Context Length | 8192 |
| Model Max Length | 8192 |
| Transformers Version | 4.41.2 |
| Tokenizer Class | PreTrainedTokenizerFast |
| Padding Token | <|reserved_special_token_250|> |
| Vocabulary Size | 128256 |
| Torch Data Type | float16 |
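
The metadata above maps onto a standard `transformers` loading call. Below is a minimal sketch, assuming the repository ships pre-quantized 4-bit weights (per the Quantization Type row) and that a GPU with roughly 6.1 GB of free VRAM is available; the prompt string and generation settings are illustrative only.

```python
# Minimal loading sketch (assumption: the repo's 4-bit weights load directly;
# dtype and repo id are taken from the table above).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Sreevadan/Llama3-finetune-4bit"

# Tokenizer class per the card: PreTrainedTokenizerFast, with
# <|reserved_special_token_250|> used as the padding token.
tokenizer = AutoTokenizer.from_pretrained(repo_id)

model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,  # matches the Torch Data Type row
    device_map="auto",          # place weights on the available GPU automatically
)

prompt = "Summarize what 4-bit quantization trades off."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Note that prompts plus generated tokens are bounded by the 8192-token context length listed above.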
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| Q4 Llama 3 | 8K / 6.1 GB | 12 | 0 |
| BNB 4Bit Llama3 Finetune | 8K / 6.1 GB | 17 | 0 |
| SimpleLlamaSentences | 2K / 0 GB | 6 | 0 |
| TinyLLama V0 | 2K / 0 GB | 17979 | 39 |
| UniversalNER TinyLLama | 2K / 0 GB | 6 | 1 |