Saiga 7B Lora by IlyaGusev


Saiga 7B Lora is an open-source language model by IlyaGusev. Features: 7B LLM, VRAM: 0.1 GB, license: cc-by-4.0, instruction-based, LLM Explorer score: 0.07.

Model Card on HF 🤗: https://huggingface.co/IlyaGusev/saiga_7b_lora

Saiga 7b Lora Benchmarks

nn.n% indicates how the model compares to the reference models: Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Saiga 7B Lora (IlyaGusev/saiga_7b_lora)

Saiga 7B Lora Parameters and Internals

Model Type 
text generation
Use Cases 
Areas:
text generation, chatbots
Primary Use Cases:
Conversational AI, Question Answering
Supported Languages 
Russian (ru), fluent
Training Details 
Data Sources:
ru_turbo_alpaca, ru_turbo_saiga, ru_sharegpt_cleaned, oasst1_ru_main_branch, gpt_roleplay_realm, ru_turbo_alpaca_evol_instruct, ru_instruct_gpt4
Context Length:
2000
Model Architecture:
LLaMA 7B-based adapter-only
Input Output 
Input Format:
Conversations with roles: system, user, bot
Accepted Modalities:
text
Output Format:
Generated responses in text
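
The role-based conversation format above can be rendered into a single prompt string. The wrapping below (each turn as `<s>{role}\n{content}</s>`, with an open `bot` turn to prime generation) is an assumption based on common Saiga-family templates, not something stated on this card; the default system string is likewise hypothetical:

```python
# Hedged sketch of building a Saiga-style prompt from (role, content) turns.
# The "<s>{role}\n{content}</s>" wrapping is an assumed template; check the
# repository's generation code for the authoritative format.
DEFAULT_SYSTEM = "Ты Сайга, русскоязычный автоматический ассистент."  # assumed default

def build_prompt(messages, system=DEFAULT_SYSTEM):
    """messages: list of (role, content) pairs, role in {"user", "bot"}."""
    turns = [("system", system)] + list(messages)
    body = "".join(f"<s>{role}\n{content}</s>\n" for role, content in turns)
    return body + "<s>bot\n"  # open "bot" turn: the model completes the answer

print(build_prompt([("user", "Привет! Как дела?")]))
```

Generation is then a matter of feeding this string to the tokenizer and sampling until the closing `</s>` of the bot turn.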
Release Notes 
v5: Added dataset 'gpt_roleplay_realm', improved dataset merging.
v4: Added support for custom system prompts.
v3: Initial release with basic conversational capabilities.
v2: Reduced loss, enhanced context handling.
v1: Initial training complete with base datasets.
LLM Name: Saiga 7b Lora
Repository 🤗: https://huggingface.co/IlyaGusev/saiga_7b_lora
Model Size: 7b
Required VRAM: 0.1 GB
Updated: 2025-10-26
Maintainer: IlyaGusev
Instruction-Based: Yes
Model Files: 0.1 GB
Supported Languages: ru
Model Architecture: Adapter
License: cc-by-4.0
Model Max Length: 2048
Is Biased: none
Tokenizer Class: LlamaTokenizer
PEFT Type: LORA
LoRA Model: Yes
PEFT Target Modules: q_proj|gate_proj|up_proj|o_proj|v_proj|k_proj|down_proj
LoRA Alpha: 16
LoRA Dropout: 0.05
R Param: 16
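
The listed LoRA settings (rank 16 across all seven projection modules, ~0.1 GB of files) are mutually consistent. A back-of-envelope check, assuming standard LLaMA-7B shapes (hidden size 4096, MLP intermediate 11008, 32 layers; these dimensions are not stated on this card):

```python
# Rough LoRA parameter count for r=16 over the listed target modules,
# assuming standard LLaMA-7B shapes (hidden 4096, MLP 11008, 32 layers).
r = 16
hidden, inter, layers = 4096, 11008, 32

def lora_params(d_in, d_out, rank):
    # A weight W of shape (d_out, d_in) gains A (rank x d_in) and B (d_out x rank).
    return rank * d_in + d_out * rank

per_layer = (
    4 * lora_params(hidden, hidden, r)    # q_proj, k_proj, v_proj, o_proj
    + 2 * lora_params(hidden, inter, r)   # gate_proj, up_proj
    + lora_params(inter, hidden, r)       # down_proj
)
total = per_layer * layers
print(total)            # roughly 40M trainable parameters
print(total * 2 / 1e9)  # about 0.08 GB in fp16
```

At fp16 precision that comes to roughly 0.08 GB of adapter weights, in line with the listed 0.1 GB once rounding and non-weight files (tokenizer, config) are included.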

Best Alternatives to Saiga 7B Lora

Best Alternatives | Context / RAM | Downloads | Likes
Qwen Megumin | 0K / 0.1 GB | 3 | 1
Deepthink Reasoning Adapter | 0K / 0.2 GB | 3 | 3
Mistral 7B Instruct Sa V0.1 | 0K / 0 GB | 5 | 0
Qwen2.5 7b NotesCorrector | 0K / 0.6 GB | 10 | 0
Mistral 7B Selfplay V0 | 0K / 0.2 GB | 6 | 0
Mistral 7B V2 Selfplay | 0K / 0.2 GB | 10 | 0
...82 6142 45d8 9455 Bc68ca4866eb | 0K / 1.2 GB | 5 | 0
...Sql Flash Attention 2 Dataeval | 0K / 1.9 GB | 7 | 3
Text To Rule Mistral 2 | 0K / 0.3 GB | 5 | 0
...al 7B Instruct V0.3 1719301256 | 0K / 0.9 GB | 9 | 0
Note: a green score (e.g. "73.2") means the model outperforms IlyaGusev/saiga_7b_lora.

Rank the Saiga 7B Lora Capabilities

Have you tried this model? Rate its performance. Your feedback helps the ML community identify the most suitable model for their needs.

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20260328a