MythoMax 13B Upstage 65B Instruct FalseBlock by IHaBiS

Tags: Autotrain compatible · Endpoints compatible · Instruct · Llama · Region: us · Safetensors · Sharded · Tensorflow

MythoMax 13B Upstage 65B Instruct FalseBlock Benchmarks

nn.n% indicates how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").
MythoMax 13B Upstage 65B Instruct FalseBlock (IHaBiS/MythoMax-13b-upstage-65b-instruct-FalseBlock)

MythoMax 13B Upstage 65B Instruct FalseBlock Parameters and Internals

Model Type 
Text Generation
Additional Notes 
This model is the result of merging Upstage's 'llama-65b-instruct' and Gryphe's 'MythoMax-L2-13b' using chargoddard's frankenllama_22b.py script.
Supported Languages 
English (Full Support), Various (Partial Support)
Training Details 
Methodology:
Model merging using frankenllama_22b.py
Model Architecture:
Derived from merging 'llama-65b-instruct' and 'MythoMax-L2-13b'
Release Notes 
Version:
1.0
Notes:
Initial merge of the two Llama models using the frankenllama_22b.py script, resulting in a 32.905B-parameter model.
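
The reported parameter count squares with the memory figure in the details below: at float16 (2 bytes per parameter), 32.905B parameters works out to roughly 65.8 GB of weights, matching the listed Required VRAM. A minimal back-of-the-envelope check (plain Python; the numbers are taken from this page, runtime overhead is ignored):

```python
# Back-of-the-envelope memory estimate for the merged checkpoint.
# Weights only: activations, KV cache and framework overhead come on top.
params = 32.905e9        # parameter count from the release notes
bytes_per_param = 2      # float16 (half precision) storage

weight_bytes = params * bytes_per_param
print(f"Estimated weight memory: {weight_bytes / 1e9:.1f} GB")  # ~65.8 GB
```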
LLM Name: MythoMax 13B Upstage 65B Instruct FalseBlock
Repository: https://huggingface.co/IHaBiS/MythoMax-13b-upstage-65b-instruct-FalseBlock
Model Size: 13b
Required VRAM: 65.8 GB
Updated: 2025-09-23
Maintainer: IHaBiS
Model Type: llama
Instruction-Based: Yes
Model Files: 9.9 GB (1-of-7), 9.7 GB (2-of-7), 9.7 GB (3-of-7), 9.7 GB (4-of-7), 9.7 GB (5-of-7), 9.7 GB (6-of-7), 7.4 GB (7-of-7)
Model Architecture: LlamaForCausalLM
Context Length: 4096
Model Max Length: 4096
Transformers Version: 4.32.0
Tokenizer Class: LlamaTokenizer
Beginning of Sentence Token: <s>
End of Sentence Token: </s>
Unk Token: <unk>
Vocabulary Size: 32000
Torch Data Type: float16
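
Given the configuration above (LlamaForCausalLM, LlamaTokenizer, float16 weights, 4096-token context), the checkpoint should load like any other sharded Llama model on the Hub. The sketch below is illustrative, not an example from the maintainer: the prompt template and device placement are assumptions, and the full float16 weights need roughly 65.8 GB of accelerator memory.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "IHaBiS/MythoMax-13b-upstage-65b-instruct-FalseBlock"

# The repo config resolves these to LlamaTokenizer / LlamaForCausalLM.
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,  # matches the checkpoint's Torch Data Type
    device_map="auto",          # requires accelerate; expect multi-GPU or offloading at ~65.8 GB
)

# Generic instruction-style prompt; the exact template the merge expects is an
# assumption, so check the upstream model cards before relying on it.
prompt = "### Instruction:\nExplain what a model merge is.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)  # well under the 4096-token context
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```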

Best Alternatives to MythoMax 13B Upstage 65B Instruct FalseBlock

Best Alternatives                      Context / RAM    Downloads / Likes
NexusRaven V2 13B                      16K / 26 GB      1096469
CodeLlama 13B Instruct Hf              16K / 26 GB      21962154
CodeLlama 13B MORepair                 16K / 26 GB      32
CodeLlama 13B Instruct Hf              16K / 26 GB      75726
TableLLM 13B                           16K / 26 GB      130729
NexusRaven 13B                         16K / 26 GB      14104
Panda Coder 13B                        16K / 26 GB      613
... Llama 2 13B Instruct Text2sql      16K / 26 GB      2727
Gen Sim                                16K / 0.3 GB     72
Llama 3 13B Instruct Ft                8K / 26.1 GB     92

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124