| Property | Value |
|---|---|
| LLM Name | MPT 7B Instruct QLora 8Bits Peft Train Eli5 1 Epoch V7 |
| Repository 🤗 | https://huggingface.co/NickyNicky/MPT-7b-instruct-QLora-8Bits-Peft-train_eli5-1_Epoch-V7 |
| Model Size | 7b |
| Required VRAM | 0 GB |
| Updated | 2025-02-09 |
| Maintainer | NickyNicky |
| Instruction-Based | Yes |
| Model Architecture | Adapter |
| LoRA Bias | none |
| Tokenizer Class | GPTNeoXTokenizer |
| PEFT Type | LORA |
| LoRA Model | Yes |
| PEFT Target Modules | Wqkv |
| LoRA Alpha | 32 |
| LoRA Dropout | 0.05 |
| R Param | 8 |
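The PEFT settings above (LoRA on the `Wqkv` projection, alpha 32, dropout 0.05, rank 8) are stored in the adapter's `adapter_config.json` and are applied automatically when the adapter is loaded on top of its base model. Below is a minimal inference sketch; the base checkpoint `mosaicml/mpt-7b-instruct` and the 8-bit loading (to match the QLoRA training setup) are assumptions not stated on this card.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_id = "mosaicml/mpt-7b-instruct"  # assumed base checkpoint for this adapter
adapter_id = "NickyNicky/MPT-7b-instruct-QLora-8Bits-Peft-train_eli5-1_Epoch-V7"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id,
    trust_remote_code=True,  # MPT models ship custom modeling code
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),  # assumed: match 8-bit QLoRA training
    device_map="auto",
)

# PeftModel reads the saved adapter_config.json, which encodes the values in
# the table above (r=8, lora_alpha=32, lora_dropout=0.05, target module Wqkv).
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

prompt = "Explain like I'm five: why is the sky blue?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```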
| Model | Likes | Downloads | VRAM |
|---|---|---|---|
| MPT 7B Instruct GGML | 30 | 23 | 3 GB |
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| Qwen Megumin | 0K / 0.1 GB | 1 | 1 |
| Deepthink Reasoning Adapter | 0K / 0.2 GB | 11 | 2 |
| Mistral 7B Instruct Sa V0.1 | 0K / 0 GB | 5 | 0 |
| Qwen2.5 7b NotesCorrector | 0K / 0.6 GB | 10 | 0 |
| ...82 6142 45d8 9455 Bc68ca4866eb | 0K / 1.2 GB | 6 | 0 |
| ...al 7B Instruct V0.3 1719301256 | 0K / 0.9 GB | 12 | 0 |
| Text To Rule Mistral 2 | 0K / 0.3 GB | 5 | 0 |
| ...al 7B Instruct V0.3 1719246505 | 0K / 0 GB | 16 | 0 |
| ...al 7B Instruct V0.3 1719297750 | 0K / 0.4 GB | 5 | 0 |
| Text To Rule Mistral | 0K / 0.4 GB | 5 | 0 |