Hippogriff 30B Chat GPTQ by TheBloke


Tags: 4-bit · Autotrain compatible · Base model: openaccess-ai-colle... · Base model (quantized): openacces... · Datasets: gsm8k, hellaswag, metaeval/scienceqa tex..., openai/summarize from ..., openassistant/oasst1, qingyisi/alpaca-cot, riddle sense, teknium/gpt4-llm-clean..., teknium/gpteacher-gene... · En · Gptq · Llama · Quantized · Region: us · Safetensors

Hippogriff 30B Chat GPTQ Benchmarks

nn.n% — How the model compares to the reference models: Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o") or GPT-4 ("gpt4").
Hippogriff 30B Chat GPTQ (TheBloke/hippogriff-30b-chat-GPTQ)

Hippogriff 30B Chat GPTQ Parameters and Internals

Model Type 
llama, text-generation
Use Cases 
Areas:
research, chat applications
Limitations:
Can produce problematic outputs; struggles with math-related tasks; may produce socially unacceptable text.
Considerations:
We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.
Additional Notes 
Hippogriff was built with Axolotl on 8x A100 80GB, training for 1.5 epochs in approximately 12 hours.
Training Details 
Data Sources:
OpenAssistant/oasst1 - cleaned dataset, similar to Guanaco
synthetic jokes generation and explanation derived from reddit jokes dataset
synthetic prose generation and rewriting self-chat
Q&A based on provided context
self instruct augmented logic_inference_oa
de-duped pygmalion dataset, filtered down to RP data, cleaned, english only, 25%
riddle_sense - instruct augmented
hellaswag, updated for detailed explanations w 30K+ rows
gsm8k - instruct augmented
ewof/code-alpaca-instruct-unfiltered synthetic self chat dataset derived from about 1000 rows
subset of QingyiSi/Alpaca-CoT for roleplay and CoT
GPTeacher-General-Instruct
ARC-Easy & ARC-Challenge - instruct augmented for detailed responses
hellaswag - 5K row subset of instruct augmented for concise responses
metaeval/ScienceQA_text_only - instruct for concise responses
openai/summarize_from_feedback - instruct augmented tl;dr summarization
Training Time:
12 hours for 1.5 epochs
Hardware Used:
8xA100 80GB
Safety Evaluation 
Ethical Considerations:
Hippogriff has not been aligned to human preferences with techniques like RLHF or deployed with in-the-loop filtering of responses. The model may produce problematic outputs.
Input Output 
Input Format:
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT:
Performance Tips:
Do not rely on Hippogriff to produce factually accurate output.
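The prompt template above can be assembled programmatically. A minimal sketch in Python — the system text is copied from the template documented on this card, while the `build_prompt` helper name and single-turn layout are my own illustration, not something the card prescribes:

```python
SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions."
)

def build_prompt(user_message: str) -> str:
    """Format a single-turn prompt in the Vicuna-style layout this card documents."""
    return f"{SYSTEM} USER: {user_message} ASSISTANT:"

prompt = build_prompt("What is GPTQ quantization?")
print(prompt)
```

The resulting string is what you pass to the tokenizer; the model's completion follows the trailing `ASSISTANT:` marker.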
LLM Name: Hippogriff 30B Chat GPTQ
Repository: 🤗 https://huggingface.co/TheBloke/hippogriff-30b-chat-GPTQ
Model Name: Hippogriff 30B Chat
Model Creator: Open Access AI Collective
Base Model(s): Hippogriff 30B Chat (openaccess-ai-collective/hippogriff-30b-chat)
Model Size: 30b
Required VRAM: 16.9 GB
Updated: 2025-08-18
Maintainer: TheBloke
Model Type: llama
Model Files: 16.9 GB
Supported Languages: en
GPTQ Quantization: Yes
Quantization Type: gptq
Model Architecture: LlamaForCausalLM
License: other
Context Length: 2048
Model Max Length: 2048
Transformers Version: 4.30.0.dev0
Tokenizer Class: LlamaTokenizer
Beginning of Sentence Token: <s>
End of Sentence Token: </s>
Unk Token: <unk>
Vocabulary Size: 32000
Torch Data Type: bfloat16
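The 16.9 GB figure is consistent with 4-bit GPTQ weights plus per-group quantization metadata. A rough back-of-the-envelope sketch — the ~32.5B parameter count for a LLaMA "30B" model and the group size of 128 are assumptions of mine, not values stated on this card:

```python
def gptq_size_gb(n_params: float, bits: int = 4, group_size: int = 128) -> float:
    """Rough on-disk size of a GPTQ checkpoint in decimal gigabytes."""
    weight_bytes = n_params * bits / 8  # packed low-bit weights
    # Per group of `group_size` weights: an fp16 scale (2 bytes)
    # plus a packed zero-point (~bits/8 bytes).
    meta_bytes = n_params / group_size * (2 + bits / 8)
    return (weight_bytes + meta_bytes) / 1e9

print(round(gptq_size_gb(32.5e9), 1))  # close to the 16.9 GB listed above
```

This ignores non-quantized tensors (embeddings, norms), so treat it as an order-of-magnitude check, not an exact accounting.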

Best Alternatives to Hippogriff 30B Chat GPTQ

| Best Alternatives | Context / RAM | Downloads / Likes |
|---|---|---|
| ... 30B Supercot SuperHOT 8K GPTQ | 8K / 16.9 GB | 315 |
| GPlatty 30B SuperHOT 8K GPTQ | 8K / 16.9 GB | 177 |
| Platypus 30B SuperHOT 8K GPTQ | 8K / 16.9 GB | 134 |
| Tulu 30B SuperHOT 8K GPTQ | 8K / 16.9 GB | 155 |
| Yayi2 30B Llama GPTQ | 4K / 17 GB | 72 |
| WizardLM 30B GPTQ | 2K / 16.9 GB | 201918 |
| Llama 30B FINAL MODEL MINI | 2K / 19.4 GB | 51 |
| ...2 Llama 30B 7K Steps Gptq 2bit | 2K / 9.5 GB | 142 |
| ...Assistant SFT 7 Llama 30B GPTQ | 2K / 16.9 GB | 202935 |
| WizardLM 30B V1.0 GPTQ | 2K / 16.9 GB | 121 |
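The "Context / RAM" column above fuses the context window and the quantized file size into one string. A small hypothetical helper to split it — the function name and return shape are my own, written against the cell format shown in the table:

```python
def parse_context_ram(cell: str) -> tuple[int, float]:
    """Split a 'Context / RAM' cell like '8K / 16.9 GB' into (context_tokens, ram_gb)."""
    context_part, ram_part = cell.split("/")
    context_tokens = int(context_part.strip().rstrip("Kk")) * 1024
    ram_gb = float(ram_part.strip().removesuffix("GB").strip())
    return context_tokens, ram_gb

print(parse_context_ram("8K / 16.9 GB"))  # (8192, 16.9)
```

This makes it easy to filter alternatives by VRAM budget or minimum context length before downloading anything.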

Rank the Hippogriff 30B Chat GPTQ Capabilities

🆘 Have you tried this model? Rate its performance. This feedback helps the ML community identify the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124