Pygmalion 2 13B SuperCOT Weighed GGUF by TheBloke


Tags: en, gguf, llama, llama2, quantized, region:us

Pygmalion 2 13B SuperCOT Weighed GGUF Benchmarks

nn.n% — how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Pygmalion 2 13B SuperCOT Weighed GGUF (TheBloke/Pygmalion-2-13B-SuperCOT-weighed-GGUF)

Pygmalion 2 13B SuperCOT Weighed GGUF Parameters and Internals

Model Type 
text-generation
Use Cases 
Areas:
research, applications involving text generation
Limitations:
Not intended for supplying factual information or advice in any form; the model will show biases similar to those observed in niche roleplaying forums on the Internet.
Considerations:
Since this is an experimental weight merge between Pygmalion-2 and SuperCOT, both the Metharme and Alpaca instruction formats are recommended.
Additional Notes 
Pygmalion 2 13B SuperCOT Weighed is distributed as GGUF files, a format that supports improved tokenization and special tokens.
Training Details 
Methodology:
This is an experimental weighted merge of Pygmalion-2 13B and Ausboss's Llama2 SuperCOT LoRA, produced with a gradient merge script from zaraki-tools.
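The zaraki-tools script itself is not reproduced here. As a rough illustration only, a gradient merge linearly ramps the mixing weight across layers, so early layers stay close to one parent model and later layers lean toward the other. A minimal sketch with toy data (the function name and per-layer linear ramp are assumptions, not the actual implementation, which operates on torch tensors):

```python
def gradient_merge(base, other, start=0.0, end=1.0):
    """Blend two checkpoints layer by layer.

    The mixing weight ramps linearly from `start` to `end` across the
    ordered parameter names, so the first layers stay close to `base`
    and the last layers lean toward `other`. Hypothetical simplification
    of a gradient-merge script; checkpoints are dicts of float lists.
    """
    names = list(base)
    merged = {}
    for i, name in enumerate(names):
        t = start + (end - start) * (i / max(len(names) - 1, 1))
        merged[name] = [(1 - t) * b + t * o
                        for b, o in zip(base[name], other[name])]
    return merged

# Toy 3-"layer" checkpoints standing in for Pygmalion-2 and SuperCOT.
pyg = {"layer0": [1.0, 1.0], "layer1": [1.0, 1.0], "layer2": [1.0, 1.0]}
cot = {"layer0": [3.0, 3.0], "layer1": [3.0, 3.0], "layer2": [3.0, 3.0]}
merged = gradient_merge(pyg, cot)
# layer0 stays at the base weights, layer2 matches the other model,
# and layer1 lands at the midpoint.
```

Because the blend ratio varies by depth rather than being a single global constant, the merge can keep one model's low-level token handling while adopting the other's higher-level behavior.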
Input Output 
Input Format:
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
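A small helper that fills the Alpaca template above (the function name is illustrative, not part of the repository):

```python
def alpaca_prompt(instruction: str) -> str:
    """Wrap an instruction in the Alpaca template quoted above."""
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        "### Instruction:\n"
        f"{instruction}\n\n"
        "### Response:\n"
    )

print(alpaca_prompt("Summarize the plot of Hamlet in one sentence."))
```

The model's generation is then appended after the trailing "### Response:" marker.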
LLM Name: Pygmalion 2 13B SuperCOT Weighed GGUF
Repository 🤗: https://huggingface.co/TheBloke/Pygmalion-2-13B-SuperCOT-weighed-GGUF
Model Name: Pygmalion 2 13B SuperCOT Weighed
Model Creator: royallab
Base Model(s): royallab/Pygmalion-2-13b-SuperCoT-weighed
Model Size: 13b
Required VRAM: 5.4 GB
Updated: 2025-09-23
Maintainer: TheBloke
Model Type: llama
Model Files: 5.4 GB, 6.9 GB, 6.3 GB, 5.7 GB, 7.4 GB, 7.9 GB, 7.4 GB, 9.0 GB, 9.2 GB, 9.0 GB, 10.7 GB, 13.8 GB
Supported Languages: en
GGUF Quantization: Yes
Quantization Type: gguf
Model Architecture: AutoModel
License: llama2
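The "Model Files" sizes above correspond to different GGUF quantization levels. As a sketch, a small helper can pick the largest quant file that fits a given VRAM budget; the label-to-size mapping below is an assumption based on the usual TheBloke quant ladder for 13B Llama-2 models, not taken from the repository:

```python
# Hypothetical mapping of GGUF quant labels to the file sizes (GB)
# listed in the "Model Files" row; the labels are assumptions.
QUANT_SIZES_GB = {
    "Q2_K": 5.4, "Q3_K_S": 5.7, "Q3_K_M": 6.3, "Q3_K_L": 6.9,
    "Q4_0": 7.4, "Q4_K_S": 7.4, "Q4_K_M": 7.9,
    "Q5_0": 9.0, "Q5_K_S": 9.0, "Q5_K_M": 9.2,
    "Q6_K": 10.7, "Q8_0": 13.8,
}

def pick_quant(vram_gb, overhead_gb=1.0, sizes=QUANT_SIZES_GB):
    """Return the largest quant whose file fits in vram_gb, leaving
    overhead_gb free for the KV cache and runtime buffers."""
    fitting = {q: s for q, s in sizes.items() if s + overhead_gb <= vram_gb}
    return max(fitting, key=fitting.get) if fitting else None

print(pick_quant(8.0))  # Q3_K_L fits an 8 GB card with 1 GB headroom
```

The listed "Required VRAM: 5.4 GB" matches the smallest (Q2_K-sized) file; larger quants trade more memory for less quantization loss.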

Best Alternatives to Pygmalion 2 13B SuperCOT Weighed GGUF

Best Alternatives                    Context / RAM    Downloads   Likes
MythoMax L2 13B GGUF                 0K / 5.4 GB      56944       194
Llama 3 13B Instruct V0.1 GGUF       0K / 5.1 GB      1037        5
Hermes 2 Pro Llama 3 13B GGUF        0K / 4.6 GB      183         0
Llama 2 13B Chat GGUF                0K / 5.4 GB      17331       204
...aMa 3 Instruct Zeroed 13B GGUF    0K / 5 GB        129         1
LLaMa 3 Base Zeroed 13B GGUF         0K / 5 GB        126         1
Llama3 13B Ku GGUF                   0K / 8.7 GB      119         0
EstopianMaid 13B GGUF                0K / 4.8 GB      2483        54
Model1                               0K / 13.8 GB     5           0
Codellama 7B Instruct GGUF           0K / 2.8 GB      201         1
Note: a green score (e.g. "73.2") means the model is better than TheBloke/Pygmalion-2-13B-SuperCOT-weighed-GGUF.

Rank the Pygmalion 2 13B SuperCOT Weighed GGUF Capabilities

🆘 Have you tried this model? Rate its performance. Your feedback helps the ML community identify the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124