LORA DeepSeek R1 Distill Llama 8B Rank 128 INSTRUCT Adapter by DavidAU


LORA DeepSeek R1 Distill Llama 8B Rank 128 INSTRUCT Adapter is an open-source LoRA adapter by DavidAU for the DeepSeek-R1-Distill-Llama-8B base model. Features: 8B base model, 0.7 GB VRAM for the adapter, Apache-2.0 license, instruction-based, LLM Explorer Score 0.18.

Tags: 128k context, Adapter, Base model: adapter:deepseek-ai..., Base model: deepseek-ai/deepsee..., Brainstorming, Deepseek, En, Finetuned, General usage, Instruct, Llama 3 LoRA, Llama 3.1 LoRA, LoRA, LoRA adapter, Mergekit, PEFT, Problem solving, Reasoning, Region: us, Safetensors, Solve riddles, Thinking
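Given the repository and base-model IDs on this card, the adapter can be attached to its base model with the Hugging Face transformers and peft libraries. A minimal sketch, assuming the standard PEFT workflow; the dtype choice and the optional merge step are assumptions, not from the card:

```python
# Minimal sketch for attaching this LoRA adapter to its base model.
# Repo IDs are taken from the card; dtype/device handling is an assumption.
BASE_MODEL = "deepseek-ai/DeepSeek-R1-Distill-Llama-8B"
ADAPTER = "DavidAU/LORA-DeepSeek-R1-Distill-Llama-8B-rank-128-INSTRUCT-adapter"

def load_model():
    # Imports kept inside the function so the constants above can be
    # inspected without transformers/peft installed.
    from transformers import AutoModelForCausalLM
    from peft import PeftModel

    base = AutoModelForCausalLM.from_pretrained(BASE_MODEL, torch_dtype="auto")
    model = PeftModel.from_pretrained(base, ADAPTER)
    # Optionally fold the ~0.7 GB adapter into the base weights for inference:
    # model = model.merge_and_unload()
    return model
```

Loading downloads both the 8B base model and the 0.7 GB adapter, so the first call is network- and disk-heavy.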

LORA DeepSeek R1 Distill Llama 8B Rank 128 INSTRUCT Adapter Benchmarks

Scores (nn.n%) show how the model compares to the reference models Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").

LORA DeepSeek R1 Distill Llama 8B Rank 128 INSTRUCT Adapter Parameters and Internals

LLM Name: LORA DeepSeek R1 Distill Llama 8B Rank 128 INSTRUCT Adapter
Repository: https://huggingface.co/DavidAU/LORA-DeepSeek-R1-Distill-Llama-8B-rank-128-INSTRUCT-adapter
Base Model(s): DeepSeek R1 Distill Llama 8B (deepseek-ai/DeepSeek-R1-Distill-Llama-8B)
Model Size: 8B
Required VRAM: 0.7 GB
Updated: 2025-09-23
Maintainer: DavidAU
Instruction-Based: Yes
Model Files: 0.7 GB
Supported Languages: en
Model Architecture: Adapter
License: apache-2.0
Is Biased: none
PEFT Type: LORA
LoRA Model: Yes
PEFT Target Modules: k_proj, o_proj, v_proj, lm_head, q_proj, gate_proj, down_proj, up_proj, embed_tokens
LoRA Alpha: 128
LoRA Dropout: 0
R Param (rank): 128
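The PEFT settings listed above are enough for a rough sanity check of the 0.7 GB adapter size: a rank-r LoRA pair on a d_in × d_out projection adds r·(d_in + d_out) parameters. The sketch below assumes standard Llama-3.1-8B dimensions (hidden 4096, intermediate 14336, 32 layers, 8 KV heads of size 128, vocab 128256), none of which are stated on this card:

```python
# Back-of-the-envelope check: does rank-128 LoRA over the listed target
# modules roughly explain the 0.7 GB adapter size?
# ASSUMPTIONS (standard Llama-3.1-8B dims; not stated on this card):
R = 128
hidden, inter, layers, kv, vocab = 4096, 14336, 32, 1024, 128256

def lora_params(d_in, d_out, r=R):
    # A LoRA pair adds an A matrix (d_in x r) and a B matrix (r x d_out).
    return r * (d_in + d_out)

per_layer = (
    lora_params(hidden, hidden)   # q_proj
    + lora_params(hidden, kv)     # k_proj (grouped-query attention: 1024-dim)
    + lora_params(hidden, kv)     # v_proj
    + lora_params(hidden, hidden) # o_proj
    + lora_params(hidden, inter)  # gate_proj
    + lora_params(hidden, inter)  # up_proj
    + lora_params(inter, hidden)  # down_proj
)
total = layers * per_layer
total += lora_params(hidden, vocab)  # lm_head
total += lora_params(vocab, hidden)  # embed_tokens
gb = total * 2 / 1e9                 # 16-bit weights = 2 bytes/param
print(f"~{total / 1e6:.0f}M adapter params, ~{gb:.2f} GB in bf16")
```

The resulting estimate (~369M parameters, ~0.74 GB at 16-bit precision) lines up with the 0.7 GB file size listed above, which suggests the adapter weights are stored in bf16/fp16.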

Best Alternatives to LORA DeepSeek R1 Distill Llama 8B Rank 128 INSTRUCT Adapter

Best Alternatives                    Context / RAM    Downloads  Likes
... 3 8B Instruct Bvr Finetune V3    8K / 16.1 GB     7          0
Flippa V6                            0K / 0 GB        83         1
...B Instruct DPO 0R100L PoliTune    0K / 16.1 GB     6          0
...B Lora Rag Citation Generation    0K / 0 GB        10         4
...lama 3 1 8B Instruct Orca ORPO    0K / 0.1 GB      8          2
Vortex2                              0K / 4.4 GB      8          0
Llama3 8B Instruct Code              0K / 0.2 GB      4          1
...Llama 3 8B Instruct ORPO QLoRA    0K / 0.7 GB      56         0
Llama 3 8B Claudstruct V3            0K / 0.1 GB      5          0
Llama 3 8B Claudstruct V1            0K / 0.1 GB      5          0

Note: a green Score (e.g. "73.2") means that the model is better than DavidAU/LORA-DeepSeek-R1-Distill-Llama-8B-rank-128-INSTRUCT-adapter.



Original data from HuggingFace, OpenCompass and various public git repos.