Hippogriff 30B Chat by openaccess-ai-collective


Tags: Autotrain compatible · Endpoints compatible · Llama · PyTorch · English (en) · Region: US · Sharded
Datasets: gsm8k · hellaswag · metaeval/ScienceQA_text_only · openai/summarize_from_feedback · OpenAssistant/oasst1 · QingyiSi/Alpaca-CoT · riddle_sense · teknium/GPT4-LLM-Cleaned · teknium/GPTeacher-General-Instruct

Hippogriff 30B Chat Benchmarks

Hippogriff 30B Chat (openaccess-ai-collective/hippogriff-30b-chat)

Hippogriff 30B Chat Parameters and Internals

Model Type: text-generation
Additional Notes: Hippogriff differs from Manticore in that it excludes the WizardLM, WizardVicuna, Alpaca, and ShareGPT datasets. The model performs better at prose and Q&A but struggles with mathematical tasks.
Supported Languages: en (proficient)
Training Details
Data Sources:
- OpenAssistant/oasst1
- synthetic jokes generation and explanation derived from a Reddit jokes dataset
- synthetic prose generation and rewriting self-chat
- Q&A based on provided context
- self-instruct augmented logic_inference_oa
- de-duped Pygmalion dataset, filtered down to RP data, cleaned, English only (25%)
- riddle_sense
- hellaswag, updated for detailed explanations with 30K+ rows
- gsm8k
- ewof/code-alpaca-instruct-unfiltered
- subset of QingyiSi/Alpaca-CoT for roleplay and CoT
- GPTeacher-General-Instruct
- ARC-Easy & ARC-Challenge
- hellaswag
- metaeval/ScienceQA_text_only
- openai/summarize_from_feedback
Methodology: fine-tuning on the datasets above
Training Time: 12 hours on 8×A100 80GB for 1.5 epochs
Hardware Used: 8×A100 80GB
Input Output
Input Format: USER:/ASSISTANT: turns, or the <|system|>, <|user|>, and <|model|> tokens
Accepted Modalities: text
Output Format: textual responses
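
Since the card lists two prompt styles, here is a minimal sketch of how each might be assembled in Python. The exact spacing, newlines, and any system preamble are assumptions; verify against the upstream model card before relying on them.

```python
# Sketch of the two prompt styles listed above. Spacing and newline
# conventions are assumptions; check the upstream card for the exact template.

def vicuna_style(user_message: str) -> str:
    # Plain USER:/ASSISTANT: turn format.
    return f"USER: {user_message}\nASSISTANT:"

def token_style(system: str, user_message: str) -> str:
    # Special-token format using the <|system|>, <|user|>, <|model|> markers.
    return f"<|system|>{system}\n<|user|>{user_message}\n<|model|>"

print(vicuna_style("Write a short poem about hippogriffs."))
```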
LLM Name: Hippogriff 30B Chat
Repository 🤗: https://huggingface.co/openaccess-ai-collective/hippogriff-30b-chat
Model Size: 30B
Required VRAM: 65.2 GB
Updated: 2025-08-18
Maintainer: openaccess-ai-collective
Model Type: llama
Model Files: 9.8 GB (1-of-7), 10.0 GB (2-of-7), 9.9 GB (3-of-7), 9.9 GB (4-of-7), 9.9 GB (5-of-7), 10.0 GB (6-of-7), 5.7 GB (7-of-7)
Supported Languages: en
Model Architecture: LlamaForCausalLM
Context Length: 2048
Model Max Length: 2048
Transformers Version: 4.30.0.dev0
Tokenizer Class: LlamaTokenizer
Beginning of Sentence Token: <s>
End of Sentence Token: </s>
Unk Token: <unk>
Vocabulary Size: 32000
Torch Data Type: bfloat16
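
Given the metadata above (LlamaForCausalLM architecture, bfloat16 weights, 2048-token context), a standard transformers loading snippet looks like the sketch below. Device placement is an assumption; the full-precision weights need roughly 65 GB of VRAM, so plan on multiple GPUs or offloading.

```python
# Minimal sketch for loading the full-precision checkpoint with transformers.
# Assumes enough GPU memory for ~65 GB of bfloat16 weights; adjust device_map
# and offloading to your hardware.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "openaccess-ai-collective/hippogriff-30b-chat"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.bfloat16,  # matches the card's Torch Data Type
    device_map="auto",           # shard across available GPUs
)

prompt = "USER: What is a hippogriff?\nASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)  # keep within the 2048-token context
print(tokenizer.decode(output[0], skip_special_tokens=True))
```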

Quantized Models of the Hippogriff 30B Chat

Model                       Likes  Downloads  VRAM
Hippogriff 30B Chat GGUF    11     32         13 GB
Hippogriff 30B Chat AWQ     1      7          17 GB
Hippogriff 30B Chat GPTQ    13     29         16 GB
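
To run one of the quantized builds, e.g. a ~13 GB GGUF file, a llama-cpp-python sketch might look like the following. The file name is a hypothetical placeholder for whichever quant you actually download.

```python
# Sketch: running a GGUF quant with llama-cpp-python. The model_path is a
# hypothetical placeholder; point it at the quantized file you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="./hippogriff-30b-chat.Q4_K_M.gguf",  # placeholder file name
    n_ctx=2048,       # matches the model's context length
    n_gpu_layers=-1,  # offload all layers to GPU if available
)

out = llm("USER: Summarize the rules of chess.\nASSISTANT:", max_tokens=200)
print(out["choices"][0]["text"])
```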

Best Alternatives to Hippogriff 30B Chat

Best Alternatives                   Context / RAM    Downloads  Likes
Flash Llama 30M 20001               32K / 0.1 GB     1903       0
Smaug Slerp 30B V0.1                32K / 60.4 GB    5          0
Tenebra 30B Alpha01                 16K / 65 GB      18         12
Llama33b 16K                        16K / 65.2 GB    15         1
Yayi2 30B Llama                     4K / 121.2 GB    922        22
... Tokens By Perplexity Bottom K   4K / 5.4 GB      5          0
...via Sample With Temperature2.0   4K / 5.4 GB      5          0
...lue Sample With Temperature2.0   4K / 5.4 GB      5          0
... Tokens By Writing Style Top K   4K / 5.4 GB      5          0
Yayi2 30B Llama                     4K / 121.2 GB    18         22



Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124