Manticore 13B Chat Pyg GGUF by TheBloke


Base model: openaccess-ai-colle... · Base model (quantized): openacces... · Datasets: anon8231489123/sharegp..., ehartford/wizard vicun..., ehartford/wizardlm alp..., ewof/code-alpaca-instr..., gsm8k, hellaswag, metaeval/scienceqa tex..., openai/summarize from ..., qingyisi/alpaca-cot, riddle sense, teknium/gpt4-llm-clean..., teknium/gpteacher-gene... · Language: En · Tags: Gguf, Instruct, Llama, Quantized, Region:us

Manticore 13B Chat Pyg GGUF Benchmarks

Manticore 13B Chat Pyg GGUF (TheBloke/manticore-13b-chat-pyg-GGUF)

Manticore 13B Chat Pyg GGUF Parameters and Internals

Model Type: llama
Additional Notes:
Manticore 13B Chat builds on Manticore with new datasets, including a de-duped subset of the Pygmalion dataset. It also removes all Alpaca style prompts using `###` in favor of chat only style prompts using `USER:`,`ASSISTANT:` as well as [pygmalion/metharme prompting](https://huggingface.co/PygmalionAI/metharme-7b#prompting) using `<|system|>, <|user|> and <|model|>` tokens.
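For illustration, here is a minimal Python sketch of the two prompt styles described above; the system text, example messages, and whitespace are assumptions, since the card only specifies the marker tokens themselves.

```python
# Minimal sketch of the two prompt styles noted above.
# Whitespace and the system/example text are assumptions for illustration only.

def chat_prompt(user_message: str) -> str:
    """Chat-style prompt using the USER:/ASSISTANT: markers."""
    return f"USER: {user_message}\nASSISTANT:"

def metharme_prompt(system_message: str, user_message: str) -> str:
    """Pygmalion/metharme-style prompt using the <|system|>, <|user|>, <|model|> tokens."""
    return f"<|system|>{system_message}<|user|>{user_message}<|model|>"

print(chat_prompt("Summarize the plot of Beowulf in two sentences."))
print(metharme_prompt("You are a helpful assistant.", "Summarize the plot of Beowulf in two sentences."))
```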
Training Details:
Data Sources: anon8231489123/ShareGPT_Vicuna_unfiltered, ehartford/wizard_vicuna_70k_unfiltered, ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered, QingyiSi/Alpaca-CoT, teknium/GPT4-LLM-Cleaned, teknium/GPTeacher-General-Instruct, metaeval/ScienceQA_text_only, hellaswag, openai/summarize_from_feedback, riddle_sense, gsm8k, ewof/code-alpaca-instruct-unfiltered
Epochs: 3
Hardware Used: 8x A100 80GB
Release Notes: Weights & Biases training run at https://wandb.ai/wing-lian/manticore-13b-v2/runs/hxr3aiiw
LLM Name: Manticore 13B Chat Pyg GGUF
Repository: https://huggingface.co/TheBloke/manticore-13b-chat-pyg-GGUF
Model Name: Manticore 13B Chat Pyg
Model Creator: Open Access AI Collective
Base Model(s): Manticore 13B Chat Pyg (openaccess-ai-collective/manticore-13b-chat-pyg)
Model Size: 13b
Required VRAM: 5.4 GB
Updated: 2025-09-23
Maintainer: TheBloke
Model Type: llama
Instruction-Based: Yes
Model Files: 5.4 GB, 6.9 GB, 6.3 GB, 5.7 GB, 7.4 GB, 7.9 GB, 7.4 GB, 9.0 GB, 9.2 GB, 9.0 GB, 10.7 GB, 13.8 GB
Supported Languages: en
GGUF Quantization: Yes
Quantization Type: gguf
Model Architecture: AutoModel
License: other
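As a rough sketch, one of the GGUF files can be run locally with llama-cpp-python; the quantization filename below is a guess at this repository's naming scheme, and the context size and sampling settings are arbitrary, so check the repo's file list before running.

```python
# Sketch: download one GGUF quantization and run it with llama-cpp-python.
# Requires: pip install huggingface_hub llama-cpp-python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="TheBloke/manticore-13b-chat-pyg-GGUF",
    # Hypothetical filename -- confirm the exact quant file name in the repository.
    filename="manticore-13b-chat-pyg.Q4_K_M.gguf",
)

llm = Llama(model_path=model_path, n_ctx=2048)

# Chat-style prompt, following the prompting notes above.
result = llm("USER: What is a manticore?\nASSISTANT:", max_tokens=128, stop=["USER:"])
print(result["choices"][0]["text"])
```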

Best Alternatives to Manticore 13B Chat Pyg GGUF

Best Alternatives | Context / RAM | Downloads / Likes
Codellama 7B Instruct GGUF | 0K / 2.8 GB | 6471
Llama 3 13B Instruct V0.1 GGUF | 0K / 5.1 GB | 10815
...aMa 3 Instruct Zeroed 13B GGUF | 0K / 5 GB | 1291
Codellama 13B Instruct GGUF | 0K / 13.8 GB | 400
CodeLlama 13B Instruct GGUF | 0K / 5.4 GB | 7208130
Finance LLM 13B GGUF | 0K / 4.8 GB | 78620
Medicine LLM 13B GGUF | 0K / 5.4 GB | 50916
Law LLM 13B GGUF | 0K / 5.4 GB | 5049
Pygmalion 2 13B GGUF | 0K / 5.4 GB | 472732
Swallow 13B Instruct GGUF | 0K / 5.5 GB | 3025


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124