UNAversal 8x7B V1beta AWQ by TheBloke


Tags: 4-bit · AutoTrain compatible · AWQ · Base model: fblgit/UNAversal-8x7B-v1beta · Base model (quantized): fblgit/UNAversal-8x7B-v1beta · Conversational · en · juanako · mixtral · moe · quantized · region:us · safetensors · sharded · tensorflow · una

UNAversal 8x7B V1beta AWQ Benchmarks

Scores ("nn.n%") indicate how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").
Model evaluated: UNAversal 8x7B V1beta AWQ (TheBloke/UNAversal-8x7B-v1beta-AWQ)

UNAversal 8x7B V1beta AWQ Parameters and Internals

Model Type: mixtral
Additional Notes: The model uses the AWQ quantization method and is highly experimental; it is intended for further tuning and for integration in research and development.
Supported Languages: en (high proficiency)
Input/Output:
Input Format: {prompt}
Accepted Modalities: text
Performance Tips: For AutoAWQ inference, use AutoAWQ 0.1.8 or later, and keep the rest of the software stack up to date (see the sketch below).
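
As a rough illustration of the tip above, here is a minimal AutoAWQ inference sketch in Python. It follows the common pattern for TheBloke's AWQ releases; the prompt text and generation settings are placeholders, not card-specified values:

```python
# Minimal AutoAWQ inference sketch for TheBloke/UNAversal-8x7B-v1beta-AWQ.
# Assumes: autoawq >= 0.1.8, a recent transformers, and a CUDA GPU with
# enough memory for the ~24.7 GB of quantized weights.
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_id = "TheBloke/UNAversal-8x7B-v1beta-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoAWQForCausalLM.from_quantized(
    model_id,
    fuse_layers=True,   # fuse attention/MLP layers for faster decoding
    safetensors=True,   # the weights are sharded .safetensors files
)

# The card's input format is just "{prompt}", i.e. raw text.
prompt = "Explain mixture-of-experts models in one paragraph."  # placeholder
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")

output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```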
LLM Name: UNAversal 8x7B V1beta AWQ
Repository (🤗): https://huggingface.co/TheBloke/UNAversal-8x7B-v1beta-AWQ
Model Name: UNAversal 8x7B v1beta
Model Creator: FBL
Base Model(s): fblgit/UNAversal-8x7B-v1beta
Model Size: 6.5b
Required VRAM: 24.7 GB
Updated: 2025-09-23
Maintainer: TheBloke
Model Type: mixtral
Model Files: 10.0 GB (1-of-3), 10.0 GB (2-of-3), 4.7 GB (3-of-3)
Supported Languages: en
AWQ Quantization: Yes
Quantization Type: awq
Model Architecture: MixtralForCausalLM
License: cc-by-nc-sa-4.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.37.0.dev0
Tokenizer Class: LlamaTokenizer
Vocabulary Size: 32000
Torch Data Type: float16
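
Since recent Transformers versions (the card lists 4.37.0.dev0) can load AWQ checkpoints directly when autoawq is installed, the model can also be used without the AutoAWQ loader. A minimal sketch; device_map="auto" is a convenience assumption, not a card-specified setting:

```python
# Loading the AWQ checkpoint through Transformers' native AWQ support.
# The checkpoint stores float16 weights (see "Torch Data Type" above).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/UNAversal-8x7B-v1beta-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)  # LlamaTokenizer, vocab 32000
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    low_cpu_mem_usage=True,  # avoid materializing full weights on CPU first
    device_map="auto",       # assumption: spread shards across available GPUs
)

print(model.config.max_position_embeddings)  # 32768, the card's context length
```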

Best Alternatives to UNAversal 8x7B V1beta AWQ

Best Alternatives                   Context / RAM    Downloads  Likes
Mixtral 8x7B Instruct V0.1 AWQ      32K / 24.7 GB    67494      58
Dolphin 2.7 Mixtral 8x7b AWQ        32K / 24.7 GB    7363       23
...kaLM Mixtral 8x7B V0.2 DPO AWQ   32K / 24.7 GB    6          0
Mixtral 8x7B Instruct V0.1 AWQ      32K / 24.7 GB    6          0
Karakuri Lm 8x7b Chat V0.1 AWQ      32K / 24.7 GB    5          0
Mixtral Instruct AWQ                32K / 24.7 GB    2771       43
Taiwan LLM 8x7B DPO AWQ             32K / 24.7 GB    7          1
Functionary Medium V2.4 AWQ         32K / 24.7 GB    5          3
Mixtral 8x7b V0.1 AWQ               32K / 24.7 GB    1303       10
H2ogpt Mixtral 8x7b 32K AWQ         32K / 24.7 GB    12         0
Note: a green score (e.g. "73.2") means that the model is better than TheBloke/UNAversal-8x7B-v1beta-AWQ.


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124