Guanaco 7B Leh V2 by KBlueLeaf


Tags: Alpaca, Autotrain compatible, Dataset:josephuscheung/guanaco..., Dataset:yahma/alpaca-cleaned, En, Endpoints compatible, Finetuned, Guanaco, Ja, Llama, Lora, Pytorch, Region:us, Safetensors, Sharded, Tensorflow, Zh

Guanaco 7B Leh V2 Benchmarks

Scores (nn.n%) show how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Guanaco 7B Leh V2 (KBlueLeaf/guanaco-7b-leh-v2)

Guanaco 7B Leh V2 Parameters and Internals

Model Type: multilingual, instruction-following
Use Cases:
  Areas: chatbots, multilingual systems
  Applications: enhanced Chinese and Japanese language understanding and generation, instruction-based tasks
  Primary Use Cases: chat-based applications
  Considerations: The model is prone to fluently generating confusing content; adjusting the input prompt may be necessary to mitigate this.
Additional Notes: The model uses a modified prompt format to achieve better performance on multilingual tasks and in chatbot use.
Supported Languages: en (fluency and accuracy expected to be high), zh (improved over the original LLaMA), ja (improved over the original LLaMA)
Training Details:
  Data Sources: JosephusCheung/GuanacoDataset, yahma/alpaca-cleaned
  Data Volume: 540k entries
  Methodology: LoRA fine-tuning with the embed_tokens and lm_head layers also trained, in bf16, with an increased batch size and a longer context length; training emphasized the output tokens via loss masking, with group-by-length batching enabled (see the configuration sketch after this block).
  Context Length: 1024
  Training Time: 2 epochs (~8400 steps)
  Hardware Used: 2x RTX 3090 GPUs
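
A minimal configuration sketch of the setup described above, assuming the PEFT and Transformers libraries; the LoRA rank, alpha, target modules, base checkpoint, and batch size are illustrative assumptions, not values published by the author.

```python
# Hypothetical sketch of the training setup described above: LoRA plus fully
# trained embed_tokens/lm_head, bf16, group-by-length batching, 2 epochs.
# Rank, alpha, target modules, base checkpoint, and batch size are assumptions.
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, TrainingArguments

base = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-7b",          # assumed LLaMA 7B base checkpoint
    torch_dtype=torch.bfloat16,
)

lora_cfg = LoraConfig(
    r=16,                                          # assumed rank
    lora_alpha=32,                                 # assumed scaling
    target_modules=["q_proj", "v_proj"],           # assumed attention projections
    modules_to_save=["embed_tokens", "lm_head"],   # trained in full, per the card
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_cfg)

args = TrainingArguments(
    output_dir="guanaco-7b-leh-v2",
    bf16=True,                        # bf16 training, per the card
    group_by_length=True,             # group-by-length batching, per the card
    per_device_train_batch_size=8,    # assumed; the card only says the batch size was increased
    num_train_epochs=2,               # 2 epochs (~8400 steps), per the card
)
# Loss masking: prompt/instruction tokens get label -100 in the data collator,
# so only the response tokens contribute to the training loss.
```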
Input Output:
  Input Format: Instruction-based prompt format with dedicated sections for Instruction, Input, User, System, and Response (see the illustrative sketch after this block).
  Accepted Modalities: text
  Output Format: multilingual text
  Performance Tips: Remove the first line of the original prompt to reduce token consumption.
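
The exact template is not reproduced on this page, so the following is only an illustrative sketch of an Alpaca-style prompt extended with the System and User sections mentioned above; the header strings and their ordering are assumptions.

```python
# Illustrative prompt builder only; the real section headers and ordering used
# by guanaco-7b-leh-v2 may differ from this Alpaca-style layout.
def build_prompt(system: str, instruction: str, user_input: str = "",
                 user_message: str = "") -> str:
    # The Alpaca preamble line is omitted, in line with the performance tip
    # above about dropping the first line to save tokens.
    parts = [f"### System:\n{system}", f"### Instruction:\n{instruction}"]
    if user_input:
        parts.append(f"### Input:\n{user_input}")
    if user_message:
        parts.append(f"### User:\n{user_message}")
    parts.append("### Response:\n")
    return "\n\n".join(parts)


print(build_prompt(
    system="You are a helpful multilingual assistant.",
    instruction="Translate the user's sentence into Japanese.",
    user_message="The weather is nice today.",
))
```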
LLM Name: Guanaco 7B Leh V2
Repository: https://huggingface.co/KBlueLeaf/guanaco-7b-leh-v2
Model Size: 7b
Required VRAM: 13.6 GB
Updated: 2025-08-31
Maintainer: KBlueLeaf
Model Type: llama
Model Files: 4.0 GB (1-of-4), 4.0 GB (2-of-4), 4.0 GB (3-of-4), 1.6 GB (4-of-4)
Supported Languages: en, zh, ja
Model Architecture: LlamaForCausalLM
License: gpl-3.0
Transformers Version: 4.28.0.dev0
Tokenizer Class: LLaMATokenizer
Vocabulary Size: 32000
Torch Data Type: float16
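
A minimal loading sketch based on the metadata above (LlamaForCausalLM, float16 weights, sharded files on the Hugging Face Hub); the prompt and generation settings are placeholder assumptions.

```python
# Minimal loading/generation sketch; the repository and dtype match the
# metadata above, while the prompt and generation parameters are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "KBlueLeaf/guanaco-7b-leh-v2"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.float16,   # matches the Torch Data Type listed above (~13.6 GB of weights)
    device_map="auto",
)

prompt = "### Instruction:\nIntroduce yourself briefly.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```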

Best Alternatives to Guanaco 7B Leh V2

Best Alternatives    Context / RAM      Downloads  Likes
A6 L                 1024K / 16.1 GB    201        0
A3.4                 1024K / 16.1 GB    13         0
A5.4                 1024K / 16.1 GB    12         0
A2.4                 1024K / 16.1 GB    12         0
M                    1024K / 16.1 GB    127        0
157                  1024K / 16.1 GB    101        0
124                  1024K / 16.1 GB    93         0
162                  1024K / 16.1 GB    60         0
2 Very Sci Fi        1024K / 16.1 GB    317        0
118                  1024K / 16.1 GB    15         0
Note: a green score (e.g. "73.2") means the model is better than KBlueLeaf/guanaco-7b-leh-v2.

Rank the Guanaco 7B Leh V2 Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  

What open-source LLMs or SLMs are you looking for? 51,022 models in total.

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124