| LLM Name | Pythia 70M Instruct Orca Chkpt 64000 |
|---|---|
| Repository 🤗 | https://huggingface.co/w601sxs/pythia-70m-instruct-orca-chkpt-64000 |
| Model Size | 70M parameters |
| Required VRAM | 0.2 GB |
| Updated | 2025-11-14 |
| Maintainer | w601sxs |
| Instruction-Based | Yes |
| Model Architecture | AutoModelForCausalLM |
| Bias | none |
| PEFT Type | LoRA |
| LoRA Model | Yes |
| PEFT Target Modules | query_key_value |
| LoRA Alpha | 32 |
| LoRA Dropout | 0.05 |
| LoRA Rank (r) | 8 |
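The PEFT rows above fully determine the adapter's LoRA configuration, so the checkpoint can be loaded in a few lines. A minimal sketch, assuming the repository hosts a standard PEFT adapter whose `adapter_config.json` names its base model, and that `transformers` and `peft` are installed; the prompt string is illustrative only:

```python
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

repo_id = "w601sxs/pythia-70m-instruct-orca-chkpt-64000"

# AutoPeftModelForCausalLM reads the adapter config, downloads the base
# model it names, and attaches the LoRA weights described in the card
# above (r=8, lora_alpha=32, lora_dropout=0.05,
# target_modules=["query_key_value"], bias="none").
model = AutoPeftModelForCausalLM.from_pretrained(repo_id)

# If the adapter repo does not ship tokenizer files, load the tokenizer
# from the base model named in adapter_config.json instead.
tokenizer = AutoTokenizer.from_pretrained(repo_id)

prompt = "What is an alpaca?"  # illustrative; the card does not specify a prompt template
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because this is a LoRA adapter rather than a full checkpoint, only the small adapter weights (hence the 0.2 GB VRAM figure above plus the base model) are stored in the repository itself.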
Best Alternatives

| Model | Context / RAM | Downloads | Likes |
|---|---|---|---|
| Llama 3.1 Tango 70B | 0K / 3 GB | 3 | 8 |
| ...3 70B Instruct Uncensored Lora | 0K / 0.8 GB | 7 | 3 |