Unmixtraled 22B V0.1 Lerp by thomasgauthier


Tags: Autotrain compatible, Dense, Endpoints compatible, Expert, Mistral, Mixtral, Region: us, Safetensors, Sharded, Tensorflow


Unmixtraled 22B V0.1 Lerp Parameters and Internals

Model Type: text generation, autoregressive

Use Cases
  Areas: research
  Applications: text generation tasks
  Primary Use Cases: AI model testing and experimentation
  Limitations: The model outputs gibberish, as it was never trained in its dense configuration; finetuning or merging is required.
  Considerations: The model requires further finetuning or merging to become useful.

Additional Notes: The model is a linear merge (lerp) of the expert components of a Mixtral architecture converted to a dense Mistral configuration.

Training Details
  Methodology: The model was adapted from a Mixtral (mixture-of-experts) architecture to a dense Mistral architecture with the same number of layers, attention heads, and hidden dimensions. Embedding, attention, layer-norm, and LM-head weights were taken directly from the 8x22B model; the MLP weights are a linear merge of the weights of experts 0 through 7 (see the sketch below).
  Model Architecture: Embeddings, attention mechanisms, and layer normalization are carried over unchanged; the MLPs of the eight experts are linearly merged into a single dense MLP per layer.

Input/Output
  Accepted Modalities: text
  Performance Tips: Because the model was never trained in the dense configuration, raw outputs are gibberish; finetune or merge with task-specific data before use.
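To make the conversion concrete, below is a minimal sketch of the expert-merging step in PyTorch. It assumes the Mixtral weights are loaded into a plain state dict under Hugging Face tensor names, and it uses a uniform 1/8 weighting for the linear merge; the card does not state the merge coefficients, so equal weighting is an assumption.

```python
import torch

NUM_EXPERTS = 8  # Mixtral 8x22B routes among 8 experts per layer


def lerp_experts(tensors: list[torch.Tensor]) -> torch.Tensor:
    # Uniform linear merge. Equal 1/8 weighting is an assumption;
    # the card only says "linear merge of experts 0 to 7 weights".
    return torch.stack(tensors).mean(dim=0)


def moe_layer_to_dense_mlp(state: dict, layer: int) -> dict:
    """Collapse the eight expert MLPs of one Mixtral layer into a single
    dense Mistral MLP. Embeddings, attention, layer norms, and the LM
    head are copied over unchanged, so only the MLPs need merging.

    Hugging Face naming: Mixtral expert weights w1/w3/w2 correspond to
    Mistral's gate_proj/up_proj/down_proj.
    """
    prefix = f"model.layers.{layer}"
    dense = {}
    for moe_name, dense_name in [("w1", "gate_proj"),
                                 ("w3", "up_proj"),
                                 ("w2", "down_proj")]:
        experts = [
            state[f"{prefix}.block_sparse_moe.experts.{i}.{moe_name}.weight"]
            for i in range(NUM_EXPERTS)
        ]
        dense[f"{prefix}.mlp.{dense_name}.weight"] = lerp_experts(experts)
    return dense
```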
LLM Name: Unmixtraled 22B V0.1 Lerp
Repository: https://huggingface.co/thomasgauthier/Unmixtraled-22B-v0.1-lerp
Model Size: 22B
Required VRAM: 44.4 GB
Updated: 2025-09-21
Maintainer: thomasgauthier
Model Type: mistral
Model Files: 9.9 GB (1-of-5), 10.0 GB (2-of-5), 9.9 GB (3-of-5), 9.9 GB (4-of-5), 4.7 GB (5-of-5)
Model Architecture: MistralForCausalLM
License: apache-2.0
Context Length: 65536
Model Max Length: 65536
Transformers Version: 4.39.3
Tokenizer Class: LlamaTokenizer
Vocabulary Size: 32000
Torch Data Type: float16
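
As a sanity check on the figures above, 22B parameters at 2 bytes each (float16) come to roughly 44 GB, matching the listed VRAM requirement. A minimal loading sketch with Hugging Face transformers follows; `device_map="auto"` is an assumption for spreading the five shards across available devices, not something the card specifies.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "thomasgauthier/Unmixtraled-22B-v0.1-lerp"

tokenizer = AutoTokenizer.from_pretrained(repo)  # LlamaTokenizer, 32000-token vocab
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.float16,  # matches the card's float16 weights (~44.4 GB)
    device_map="auto",          # assumption: shard/offload across available devices
)

# Per the card, raw outputs are gibberish until the model is finetuned
# or merged; this only verifies that the checkpoint loads and runs.
inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```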

Best Alternatives to Unmixtraled 22B V0.1 Lerp

Best Alternatives                      Context / RAM    Downloads / Likes
MS Schisandra 22B V0.2                 128K / 44.7 GB   39
...ntheon RP Pure 1.6.2 22B Small      128K / 44.7 GB   932
MS Meadowlark 22B                      128K / 44.7 GB   6815
...rker The Final Abomination 22B      128K / 44.7 GB   66
...er The Final Transgression 22B      128K / 44.7 GB   93
...Darker The Final Directive 22B      128K / 44.7 GB   60
The Omega Directive M 22B V1.0         128K / 44.7 GB   62
Retrograde Omega M 22B V1.0            128K / 44.7 GB   50
Beeper King 22B                        128K / 44.7 GB   67
... V4x1.6.2RP Cydonia VXXX 22B 8      128K / 44.7 GB   55


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124