Yi 34B 200K AEZAKMI V2 LoRA by adamo1139


Yi 34B 200K AEZAKMI V2 LoRA is an open-source language model by adamo1139. Features: 34B LLM, VRAM: 0.5 GB, License: apache-2.0, LLM Explorer Score: 0.11.

  4-bit   Bitsandbytes   Endpoints compatible   Llama   Lora   Region:us


Yi 34B 200K AEZAKMI V2 LoRA Parameters and Internals

Model Type: text generation

Use Cases
- Areas: chatbot development
- Limitations: handling long system messages; tendency to repeat responses; unrestricted nature can lead to inappropriate outputs
- Considerations: use the structured prompt format for optimal performance

Training Details
- Data Sources: AEZAKMI v2 dataset
- Methodology: LoRA fine-tuning
- Context Length: 200,000 tokens
- Training Time: 25 hours
- Hardware Used: single local RTX 3090 Ti
- Model Architecture: HLF fine-tuned on AEZAKMI v2

Safety Evaluation
- Ethical Considerations: the unrestricted nature of v2 needs improvement

Input / Output
- Input Format: ChatML
- Accepted Modalities: text
- Output Format: standard response format with paragraph spacing
- Performance Tips: set repetition penalty to ~1.05 and temperature to 1.2 for best results
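Since the card specifies ChatML input and the sampling settings above, here is a minimal sketch of assembling such a prompt together with matching generation settings. The helper name and system message are illustrative assumptions, not taken from the model card; the kwarg names follow the common Hugging Face `generate()` convention.

```python
# Minimal sketch: build a ChatML-formatted prompt as described above.
# The helper name and system message are illustrative assumptions.

def chatml_prompt(system: str, user: str) -> str:
    """Wrap a system and a user message in ChatML turn delimiters."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

# Sampling settings recommended by the card: repetition penalty ~1.05,
# temperature 1.2 (key names per the common HF generate() convention).
generation_kwargs = {"repetition_penalty": 1.05, "temperature": 1.2}

prompt = chatml_prompt("You are a helpful assistant.", "Hello!")
print(prompt)
```

The prompt ends with an open `<|im_start|>assistant` turn so the model continues from there.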
LLM Name: Yi 34B 200K AEZAKMI V2 LoRA
Repository: https://huggingface.co/adamo1139/Yi-34B-200K-AEZAKMI-v2-LoRA
Model Size: 34B
Required VRAM: 0.5 GB
Updated: 2026-03-30
Maintainer: adamo1139
Model Files: 0.5 GB
Model Architecture: AutoModelForCausalLM
License: apache-2.0
Model Max Length: 4096
Is Biased: none
Tokenizer Class: LlamaTokenizer
Padding Token: <unk>
PEFT Type: LORA
LoRA Model: Yes
PEFT Target Modules: o_proj, k_proj, up_proj, v_proj, gate_proj, q_proj, down_proj
LoRA Alpha: 32
LoRA Dropout: 0.05
R Param: 16
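The PEFT fields listed above can be read as a LoRA adapter configuration. The sketch below mirrors them as a plain dictionary in the shape of a PEFT-style `adapter_config.json`; treat it as an illustration of the listed hyperparameters, not the adapter's actual published config file.

```python
# Sketch of the LoRA hyperparameters listed above, in the shape of a
# PEFT-style adapter config (illustrative; not the published file).
adapter_config = {
    "peft_type": "LORA",
    "r": 16,                 # R Param (rank of the low-rank update)
    "lora_alpha": 32,        # LoRA Alpha
    "lora_dropout": 0.05,    # LoRA Dropout
    "bias": "none",          # Is Biased
    "target_modules": [
        "o_proj", "k_proj", "up_proj", "v_proj",
        "gate_proj", "q_proj", "down_proj",
    ],
}

# Effective LoRA scaling factor is alpha / r:
scaling = adapter_config["lora_alpha"] / adapter_config["r"]
print(scaling)  # 2.0
```

With alpha = 32 and r = 16, the adapter's low-rank update is applied with a scaling factor of 2.0.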

Best Alternatives to Yi 34B 200K AEZAKMI V2 LoRA

Best Alternatives | Context / RAM | Downloads | Likes
...awrr1 LORA DPO Experimental R3 | 0K / 0.5 GB | 713 | 1
Yi 34B Qlora E1 | 0K / 5.8 GB | 765 | 0
Yi 34B AEZAKMI V1 LoRA | 0K / 0.5 GB | 5 | 1
... 34B Spicyboros 2 2 Run3 QLoRA | 0K / 0.5 GB | 1 | 1
Yi 34B Spicyboros 3.1 2 LoRA | 0K / 2 GB | 0 | 1
Yi 34B Spicyboros 3.1 LoRA | 0K / 2 GB | 9 | 4
Limarpv3 Yi Llama 34B Lora | 0K / 1 GB | 10 | 10
Limarpv3 Yi Llama 34B Lora | 0K / 1 GB | 8 | 10
Yi 34B GiftedConvo | 0K / 5.8 GB | 2 | 2



Original data from HuggingFace, OpenCompass and various public git repos.