| LLM Name | Beef 12B |
|---|---|
| Repository 🤗 | https://huggingface.co/hardlyworking/Beef-12B |
| Base Model(s) | |
| Merged Model | Yes |
| Model Size | 12B |
| Required VRAM | 3.6 GB |
| Updated | 2025-05-21 |
| Maintainer | hardlyworking |
| Model Files | |
| Model Architecture | Adapter |
| Bias | none |
| Tokenizer Class | PreTrainedTokenizer |
| Padding Token | `<pad>` |
| PEFT Type | LORA |
| LoRA Model | Yes |
| PEFT Target Modules | o_proj, down_proj, gate_proj, up_proj, k_proj, q_proj, v_proj |
| LoRA Alpha | 16 |
| LoRA Dropout | 0.05 |
| LoRA Rank (r) | 128 |
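The PEFT settings above map directly onto a `LoraConfig`. The sketch below reconstructs that configuration and shows a minimal loading path using the standard `peft`/`transformers` APIs; note that the base checkpoint is not listed on this card, so `BASE_MODEL` is a placeholder you must supply yourself.

```python
# Minimal sketch: reconstructing the adapter config listed above and
# attaching the adapter to a base model with Hugging Face PEFT.
from peft import LoraConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Reconstruction of the card's hyperparameters: r=128, alpha=16,
# dropout=0.05, bias="none", all attention + MLP projections targeted.
lora_config = LoraConfig(
    r=128,
    lora_alpha=16,
    lora_dropout=0.05,
    bias="none",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)

ADAPTER_ID = "hardlyworking/Beef-12B"
BASE_MODEL = "<base-checkpoint>"  # assumption: the base model is not listed on this card

base = AutoModelForCausalLM.from_pretrained(BASE_MODEL, torch_dtype="auto")
model = PeftModel.from_pretrained(base, ADAPTER_ID)  # applies the LoRA weights
tokenizer = AutoTokenizer.from_pretrained(ADAPTER_ID)
```

Since the card lists the model as merged, calling `model.merge_and_unload()` after loading would fold the LoRA deltas into the base weights for adapter-free inference.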
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| Noodles 12B | 0K / 3.6 GB | 11 | 0 |
| G3 12B Pt Story Qlora | 0K / 0.5 GB | 14 | 0 |
| CEV Other 12B Lora | 0K / 0.1 GB | 5 | 0 |
| Meme 12B Lora E2 | 0K / 0.2 GB | 8 | 0 |
| Meme 12B Lora E2 | 0K / 0.2 GB | 5 | 0 |
| Control 12B R3 Lora | 0K / 2.9 GB | 6 | 0 |
| ...Noctis R64 Test Train Lora 12B | 0K / 0.9 GB | 7 | 0 |
| ...Noctis R64 Test Train Lora 12B | 0K / 0.9 GB | 5 | 0 |
| Aurora 12B | 0K / 24.5 GB | 8 | 0 |
| Aura NeMo 12B | 0K / 1.8 GB | 6 | 2 |