Falcon 7B DPO Lora is an open-source language model by Mastane: a 7B LLM requiring 0.5 GB of VRAM, released under the Apache-2.0 license, with an LLM Explorer Score of 0.11.
| LLM Name | Falcon 7B DPO Lora |
|---|---|
| Repository 🤗 | https://huggingface.co/Mastane/falcon-7b-dpo-lora |
| Base Model(s) | |
| Model Size | 7b |
| Required VRAM | 0.5 GB |
| Updated | 2026-03-29 |
| Maintainer | Mastane |
| Model Files | |
| Model Architecture | AutoModelForCausalLM |
| License | apache-2.0 |
| Model Max Length | 2048 |
| Is Biased | none |
| Tokenizer Class | PreTrainedTokenizerFast |
| Padding Token | <|endoftext|> |
| PEFT Type | LORA |
| LoRA Model | Yes |
| PEFT Target Modules | dense, query_key_value, dense_h_to_4h, dense_4h_to_h |
| LoRA Alpha | 16 |
| LoRA Dropout | 0.1 |
| R Param | 64 |
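
The card identifies this repository as a LoRA adapter (PEFT type LORA) rather than a standalone checkpoint, so it is applied on top of a Falcon base model. Below is a minimal inference sketch using the standard transformers and peft APIs; since the Base Model(s) field above is empty, tiiuae/falcon-7b is an assumption, not something the card confirms.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_MODEL = "tiiuae/falcon-7b"          # assumption: the base model is not listed in the card
ADAPTER = "Mastane/falcon-7b-dpo-lora"   # repository from the table above

# Load the base model; the 0.5 GB VRAM figure above covers only the LoRA
# adapter weights, so the roughly 14 GB (bf16) Falcon-7B base is still required.
model = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = "<|endoftext|>"    # padding token listed in the card

# Attach the DPO-trained LoRA adapter to the base weights.
model = PeftModel.from_pretrained(model, ADAPTER)

prompt = "Explain direct preference optimization in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

For reference, the PEFT hyperparameters listed above correspond to a peft LoraConfig like the following; this is a reconstruction from the card's values, not the author's actual training script.

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=64,                  # R Param
    lora_alpha=16,         # LoRA Alpha
    lora_dropout=0.1,      # LoRA Dropout
    target_modules=[       # PEFT Target Modules
        "dense",
        "query_key_value",
        "dense_h_to_4h",
        "dense_4h_to_h",
    ],
    task_type="CAUSAL_LM",
)
```
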
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| Mistral 7B V0.3 | 32K / 14.5 GB | 1253 | 5 |
| Mistral 7B Instruct V0.3 | 32K / 14.5 GB | 39 | 5 |
| ...unoichi Lemon Royale V3 32K 7B | 32K / 14.5 GB | 2 | 5 |
| Mistral 7B Instruct V0.3 | 32K / 14.5 GB | 513 | 3 |
| Mistralai Mistral 7B V0.3 | 32K / 14.5 GB | 37 | 3 |
| ...ralai Mistral 7B Instruct V0.3 | 32K / 14.5 GB | 5 | 3 |
| Mistral 7B Instruct V0.3 | 32K / 14.5 GB | 5 | 0 |
| Mistral 7B V0.2 | 32K / 14.5 GB | 54 | 1 |
| Full V4 Astromistral Final | 32K / 4.5 GB | 1 | 1 |
| LlamaGuard 7B | 4K / 13.5 GB | 4073 | 239 |
Have you tried this model? Rate its performance. Your feedback helps the ML community identify the most suitable model for their needs.