Minotaur 13B Fixed GPTQ by TheBloke

Tags: 4-bit · Autotrain compatible · Axolotl · GPTQ · Instruct · Llama · MPT · OpenAccess AI Collective · Quantized · Safetensors · Region: US
Base model (quantized from): openaccess-ai-collective/minotaur-13b-fixed
Datasets: listed in full under Training Details below

Minotaur 13B Fixed GPTQ Parameters and Internals

Model Type: llama

Additional Notes: Minotaur has not been aligned to human preferences with techniques like RLHF or deployed with in-the-loop filtering like ChatGPT, so it can produce problematic outputs. Built with Axolotl.

Training Details:
- Data Sources (a loading sketch follows this list): ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered, QingyiSi/Alpaca-CoT, teknium/GPTeacher-General-Instruct, metaeval/ScienceQA_text_only, openai/summarize_from_feedback, camel-ai/math, camel-ai/physics, camel-ai/chemistry, camel-ai/biology, winglian/evals, hellaswag, riddle_sense, gsm8k
- Methodology: fine-tuning on open datasets
- Training Time: 7.5 hours
- Hardware Used: 6× A100 80 GB
- Model Architecture: built on top of LLaMA-13B
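
All of the sources above are published on the Hugging Face Hub. As a point of reference, here is a minimal sketch of pulling two of them with the `datasets` library; the config, split, and field names are the standard ones for those repos, not something stated on this page:

```python
# A minimal sketch, assuming the `datasets` library is installed
# (pip install datasets). Dataset IDs come from the list above;
# config/split/field names are the usual ones for these repos.
from datasets import load_dataset

gsm8k = load_dataset("gsm8k", "main", split="train")       # grade-school math word problems
camel_math = load_dataset("camel-ai/math", split="train")  # synthetic math dialogues

print(gsm8k[0]["question"])  # first word problem
print(len(camel_math))       # number of examples
```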
Release Notes: Due to a bug, the initial release of Minotaur 13B dropped a few datasets during training. The issue has been corrected and this is the retrained model; the dropped datasets covered prose generation, classification, and coding.
LLM Name: Minotaur 13B Fixed GPTQ
Repository: https://huggingface.co/TheBloke/minotaur-13B-fixed-GPTQ
Model Name: Minotaur 13B Fixed
Model Creator: Open Access AI Collective
Base Model(s): openaccess-ai-collective/minotaur-13b-fixed (Minotaur 13B Fixed)
Model Size: 13B
Required VRAM: 7.5 GB
Updated: 2025-08-21
Maintainer: TheBloke
Model Type: llama
Instruction-Based: Yes
Model Files: 7.5 GB
GPTQ Quantization: Yes
Quantization Type: gptq
Model Architecture: LlamaForCausalLM
License: other
Context Length: 2048
Model Max Length: 2048
Transformers Version: 4.28.0.dev0
Tokenizer Class: LlamaTokenizer
Beginning of Sentence Token: <s>
End of Sentence Token: </s>
Unk Token: <unk>
Vocabulary Size: 32000
Torch Data Type: float16
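
Given the spec above (4-bit GPTQ weights in safetensors, LlamaForCausalLM, 2048-token context), here is a minimal inference sketch, assuming a recent transformers with the GPTQ integration (optimum + auto-gptq) installed. The USER:/ASSISTANT: prompt shape is an assumption based on TheBloke's usual template for Minotaur, not something stated on this page:

```python
# A minimal sketch, assuming transformers >= 4.32 with optimum and
# auto-gptq installed so GPTQ checkpoints load transparently.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "TheBloke/minotaur-13B-fixed-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(repo_id, use_fast=True)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

# Assumed Vicuna-style prompt; check the repo's model card for the
# canonical template.
prompt = "USER: Explain GPTQ quantization in one paragraph. ASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Context length is 2048 tokens, so keep prompt + generation under that.
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The older AutoGPTQ route (`AutoGPTQForCausalLM.from_quantized(repo_id, ...)`) also works for this family of repos; the transformers path shown here is the simpler one on current versions.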

Best Alternatives to Minotaur 13B Fixed GPTQ

Model                              Context / RAM    Downloads   Likes
CodeLlama 13B Instruct GPTQ        16K / 7.3 GB     1115        39
NexusRaven 13B GPTQ                16K / 7.3 GB     11          7
...sianai 13B Chat Bilingual GPTQ  8K / 7.3 GB      14          4
Leo Hessianai 13B Chat GPTQ        8K / 7.3 GB      12          1
...lama2 13B Orca V2 8K 3166 GPTQ  8K / 7.3 GB      19          25
Swallow 13B Instruct GPTQ          4K / 7.5 GB      6           2
Mythalion 13B GPTQ                 4K / 7.3 GB      1099        52
Pygmalion 2 13B GPTQ               4K / 7.3 GB      116         41
...2 13B Ft Instruct Es Gptq 3bit  4K / 5.7 GB      5           3
Speechless Llama2 13B GPTQ         4K / 7.3 GB      9           2

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124