Guanaco 13B Merged 8bit by Monero


Tags: Merged Model · 8bit · Autotrain compatible · Dataset: josephuscheung/guanaco... · Dataset: timdettmers/guanaco-13... · Endpoints compatible · Llama · Quantized · Region: us

Guanaco 13B Merged 8bit Benchmarks

Scores show how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Guanaco 13B Merged 8bit (Monero/Guanaco-13b-Merged-8bit)

Guanaco 13B Merged 8bit Parameters and Internals

Model Type 
text generation, multimodal
Use Cases 
Areas:
research, commercial applications
Applications:
multilingual conversational agents, multimodal chatbots
Primary Use Cases:
instruction-following tasks, role-playing support
Limitations:
Unfiltered for harmful content, Potential inaccuracies in knowledge-based content
Considerations:
Ensure information from the model is cross-verified with reliable sources.
Additional Notes 
While the model supports multimodal VQA (visual question answering), accurate, source-confirmed inputs are recommended for knowledge-based queries.
Supported Languages 
languages (English, Simplified Chinese, Traditional Chinese (Taiwan), Traditional Chinese (Hong Kong), Japanese, German), proficiency (Advanced)
Training Details 
Data Sources:
timdettmers/guanaco-13b, JosephusCheung/GuanacoDataset
Data Volume:
534,530 entries
Methodology:
Integration with Alpaca dataset
Model Architecture:
LoRA merged with LLaMA 13B
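The architecture note above describes a LoRA adapter merged into the LLaMA 13B base weights. The merge follows the standard LoRA arithmetic, W' = W + (alpha / r) · BA, after which the adapter adds no inference cost. A toy sketch with made-up 2×2 matrices (not the model's real weights or hyperparameters):

```python
# Toy LoRA merge: W' = W + (alpha / r) * (B @ A), using plain Python lists.
def matmul(X, Y):
    rows, inner, cols = len(X), len(Y), len(Y[0])
    return [[sum(X[i][k] * Y[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

r, alpha = 2, 16                       # illustrative LoRA rank and scaling
W = [[1.0, 0.0], [0.0, 1.0]]           # frozen base weight (2x2 toy)
A = [[0.5, -0.5], [1.0, 2.0]]          # LoRA down-projection (r x d_in)
B = [[1.0, 0.0], [0.0, 1.0]]           # LoRA up-projection (d_out x r)

scale = alpha / r
delta = matmul(B, A)
W_merged = [[W[i][j] + scale * delta[i][j] for j in range(2)] for i in range(2)]
# W_merged now behaves like the fine-tuned weight with no adapter at inference time
```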
Safety Evaluation 
Risk Categories:
misinformation, bias
Ethical Considerations:
Outputs may not adhere to ethical norms.
Responsible AI Considerations 
Fairness:
Unfiltered outputs may include biased content.
Transparency:
Publicly accessible dataset and model weights.
Accountability:
Users are responsible for cross-verifying factual information.
Mitigation Strategies:
Encouragement to verify information from reliable sources.
Input Output 
Input Format:
Structured format with System, User, and Assistant roles, akin to ChatGPT format.
Accepted Modalities:
text, image
Output Format:
Multimodal responses with image and text.
Performance Tips:
Utilize role-play and context continuity features for enhanced interaction.
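The input format above is only described as structured System, User, and Assistant roles "akin to ChatGPT format"; the exact labels and separators are an assumption, not confirmed by the model card. A minimal prompt builder under that assumption:

```python
def build_prompt(system: str, turns: list) -> str:
    """Assemble a role-tagged prompt; role labels are assumed, not documented."""
    lines = [f"System: {system}"]
    for user, assistant in turns:
        lines.append(f"User: {user}")
        if assistant is not None:
            lines.append(f"Assistant: {assistant}")
    lines.append("Assistant:")  # leave the final turn open for the model to complete
    return "\n".join(lines)

prompt = build_prompt("You are a helpful assistant.",
                      [("Hello!", "Hi there."), ("What is Guanaco?", None)])
```

Carrying prior turns in `turns` is what gives the context continuity the performance tip above recommends.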
LLM Name: Guanaco 13B Merged 8bit
Repository 🤗: https://huggingface.co/Monero/Guanaco-13b-Merged-8bit
Base Model(s): Guanaco 13B Merged (timdettmers/guanaco-13b-merged)
Merged Model: Yes
Model Size: 13b
Required VRAM: 13.4 GB
Updated: 2025-09-14
Maintainer: Monero
Model Type: llama
Model Files: 13.4 GB
Quantization Type: 8bit
Model Architecture: LlamaForCausalLM
Context Length: 2048
Model Max Length: 2048
Transformers Version: 4.28.0
Tokenizer Class: LlamaTokenizer
Vocabulary Size: 32000
Torch Data Type: float16
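The 13.4 GB file size is consistent with back-of-the-envelope arithmetic for 8-bit quantization, which stores roughly one byte per weight. A quick sanity check (the small remainder over 13 GB plausibly covers embeddings, quantization scales, and metadata, though the exact breakdown is not documented):

```python
# Sanity check: int8 quantization stores roughly one byte per parameter.
params = 13_000_000_000      # ~13B weights in the merged model
bytes_per_weight = 1         # int8
weight_gb = params * bytes_per_weight / 1e9
# ~13 GB of raw weights, in line with the listed 13.4 GB of model files
```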

Best Alternatives to Guanaco 13B Merged 8bit

| Best Alternatives | Context | RAM | Downloads | Likes |
|---|---|---|---|---|
| Llama13b 32K Illumeet Finetune | 32K | 26 GB | 9 | 0 |
| ...Maid V3 13B 32K 6.0bpw H6 EXL2 | 32K | 10 GB | 5 | 1 |
| ...Maid V3 13B 32K 8.0bpw H8 EXL2 | 32K | 13.2 GB | 5 | 1 |
| WhiteRabbitNeo 13B V1 | 16K | 26 GB | 26274 | 28 |
| CodeLlama 13B Python Fp16 | 16K | 26 GB | 2635 | 25 |
| CodeLlama 13B Instruct Fp16 | 16K | 26 GB | 2650 | 28 |
| ...Llama 13B Instruct Hf 4bit MLX | 16K | 7.8 GB | 125 | 22 |
| CodeLlama 13B Fp16 | 16K | 26 GB | 56 | 6 |
| Airophin 13B Pntk 16K Fp16 | 16K | 26 GB | 1657 | 4 |
| Codellama 13B Bnb 4bit | 16K | 7.2 GB | 42 | 5 |
Note: a green score (e.g. "73.2") indicates that the model outperforms Monero/Guanaco-13b-Merged-8bit.



Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124