Manticore 30B Chat Pyg Alpha by openaccess-ai-collective


Tags: Autotrain compatible · Endpoints compatible · Instruct · Llama · PyTorch · TensorFlow · Safetensors · Sharded · Region: US · Language: en
Datasets: anon8231489123/ShareGPT_Vicuna_unfiltered · ehartford/wizard_vicuna_70k_unfiltered · ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered · ewof/code-alpaca-instruct-unfiltered · gsm8k · hellaswag · metaeval/ScienceQA_text_only · openai/summarize_from_feedback · QingyiSi/Alpaca-CoT · riddle_sense · teknium/GPT4-LLM-Cleaned · teknium/GPTeacher-General-Instruct

Manticore 30B Chat Pyg Alpha Benchmarks

Manticore 30B Chat Pyg Alpha (openaccess-ai-collective/manticore-30b-chat-pyg-alpha)

Manticore 30B Chat Pyg Alpha Parameters and Internals

Model Type 
text-generation
Additional Notes 
Special thanks to Nanobit for helping with Axolotl, to TheBloke for quantizing these models so they are more accessible to all, to ehartford for cleaned datasets, and to 0x000011b for the RP dataset.
Training Details 
Data Sources:
anon8231489123/ShareGPT_Vicuna_unfiltered, ehartford/wizard_vicuna_70k_unfiltered, ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered, QingyiSi/Alpaca-CoT, teknium/GPT4-LLM-Cleaned, teknium/GPTeacher-General-Instruct, metaeval/ScienceQA_text_only, hellaswag, openai/summarize_from_feedback, riddle_sense, gsm8k, ewof/code-alpaca-instruct-unfiltered
Data Volume:
40% of the datasets
Methodology:
fine-tuning
Training Time:
14 hours
Hardware Used:
8xA100 80GB
Model Architecture:
Llama
Input Output 
Input Format:
chat-style prompts only, using `USER:` and `ASSISTANT:` markers or the `<|system|>`, `<|user|>`, and `<|model|>` tokens (see the sketch below)
Accepted Modalities:
text
Output Format:
text
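
Both prompt styles named under Input Format can be built as plain strings. A minimal sketch in Python; the system message and question are illustrative placeholders, not from the original card:

```python
# Vicuna-style chat prompt using USER:/ASSISTANT: markers.
vicuna_prompt = (
    "USER: Summarize the plot of Hamlet in two sentences.\n"
    "ASSISTANT:"
)

# Pygmalion-style prompt using <|system|>, <|user|>, and <|model|> tokens.
pyg_prompt = (
    "<|system|>You are a helpful, concise assistant.\n"
    "<|user|>Summarize the plot of Hamlet in two sentences.\n"
    "<|model|>"
)
```

The model continues generating after `ASSISTANT:` or `<|model|>`, so both prompts end at the start of the assistant turn.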
Release Notes 
Notes:
Alpha release of a checkpoint taken before spikes appeared in the training and evaluation loss. Some alignment issues were noted.
LLM Name: Manticore 30B Chat Pyg Alpha
Repository: https://huggingface.co/openaccess-ai-collective/manticore-30b-chat-pyg-alpha
Model Size: 30b
Required VRAM: 65.2 GB
Updated: 2025-08-22
Maintainer: openaccess-ai-collective
Model Type: llama
Instruction-Based: Yes
Model Files: 9.8 GB (1-of-7), 10.0 GB (2-of-7), 9.9 GB (3-of-7), 9.9 GB (4-of-7), 9.9 GB (5-of-7), 10.0 GB (6-of-7), 5.7 GB (7-of-7)
Supported Languages: en
Model Architecture: LlamaForCausalLM
Model Max Length: 2048
Transformers Version: 4.28.0.dev0
Tokenizer Class: LlamaTokenizer
Beginning of Sentence Token: <s>
End of Sentence Token: </s>
Unk Token: <unk>
Vocabulary Size: 32000
Torch Data Type: float16
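
Given the repository id and float16 weights listed above, the model can be loaded with the Hugging Face transformers library. A minimal sketch, assuming transformers and accelerate are installed and roughly 65 GB of GPU memory is available (with less, `device_map="auto"` will offload layers to CPU):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "openaccess-ai-collective/manticore-30b-chat-pyg-alpha"

tokenizer = AutoTokenizer.from_pretrained(repo)  # LlamaTokenizer, vocab size 32000
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.float16,  # matches the card's Torch Data Type
    device_map="auto",          # shard/offload across available devices
)

# Vicuna-style prompt as described under Input Format.
prompt = "USER: Summarize the plot of Hamlet in two sentences.\nASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The 65.2 GB VRAM figure is consistent with 30B parameters at two bytes each in float16, plus activation and KV-cache overhead.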

Best Alternatives to Manticore 30B Chat Pyg Alpha

Best Alternatives | Context / RAM | Downloads | Likes
Llama 30B Instruct 2048 | 2K / 65.2 GB | 3125 | 102
...ct 2048 Open Platypus 100steps | 2K / 65.2 GB | 1830 | 0
...azarus Instruct PL Lora Unload | 2K / 65.2 GB | 1865 | 0
...B 2048 Instruct PL Lora Unload | 2K / 65.2 GB | 1861 | 1
...lama 30B Instruct 2048 PL Lora | 2K / 65.2 GB | 1864 | 0
Llama 30B Instruct | 2K / 65.2 GB | 1836 | 23
H2ogpt Oasst1 512 30B HF | 2K / 65 GB | 2034 | 2
...pt Research Oig Oasst1 512 30B | 2K / 81.1 GB | 1990 | 3
Open Instruct Human Mix 30B | 2K / 130.5 GB | 13 | 1
...nsored Instruct PL Lora Unload | 2K / 65.2 GB | 1867 | 0


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124