Norobara ZLoss 8x7B AWQ by TheBloke


Tags: 4-bit, AutoTrain compatible, AWQ, Base model: doctor-shotgun/noro..., Base model (quantized): doctor-sh..., Conversational, Dataset: doctor-shotgun/capybar..., Dataset: doctor-shotgun/no-robo..., Dataset: huggingfaceh4/no robot..., Dataset: ldjnr/capybara, Dataset: ldjnr/verified-camel, Dataset: unalignment/toxic-dpo-..., en, Mixtral, MoE, Quantized, Region: us, Safetensors, Sharded, Tensorflow


Norobara ZLoss 8x7B AWQ Parameters and Internals

Model Type 
mixtral
Use Cases 
Primary Use Cases:
Uncensored general instruction-following model
Limitations:
No ethical alignment was applied to prevent the generation of toxic or harmful outputs.
Additional Notes 
This model was part of an experiment to create an effective uncensored instruction-following model while exploring various loss techniques. The use of toxic data indicates a focus on uncovering potential biases and limitations in model generation.
Training Details 
Data Sources:
LDJnr/Capybara, unalignment/toxic-dpo-v0.1, LDJnr/Verified-Camel, HuggingFaceH4/no_robots, Doctor-Shotgun/no-robots-sharegpt, Doctor-Shotgun/capybara-sharegpt
Methodology:
This is an experimental instruct-tuned model built on a ZLoss- and MegaBlocks-based fork of transformers. It was trained as a QLoRA adapter for 3 epochs on a single H100 GPU, taking around 13 hours (a generic QLoRA sketch follows this section).
Training Time:
13 hours
Hardware Used:
single H100 GPU
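As a rough illustration of the QLoRA adapter setup described above, here is a minimal, generic sketch using peft and bitsandbytes. It does not reproduce the ZLoss/MegaBlocks fork actually used for this model; the base checkpoint, target modules, and hyperparameters are placeholders.

```python
# Generic QLoRA sketch (illustrative only; not the original ZLoss/MegaBlocks fork).
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# 4-bit NF4 base weights, the usual QLoRA recipe.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mixtral-8x7B-v0.1",   # placeholder base checkpoint
    quantization_config=bnb_config,
    device_map="auto",
)
base = prepare_model_for_kbit_training(base)

# Low-rank adapter on the attention projections (placeholder choices).
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
# The actual training loop (3 epochs on one H100 per the card) would use a
# standard Trainer/SFT setup, omitted here.
```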
Responsible AI Considerations 
Fairness:
The model will show biases present in the base model.
Input Output 
Input Format:
### Instruction:
{system_message}

### Input:
{prompt}

### Response:
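A small sketch of assembling a prompt in this format. The blank-line spacing follows the common Alpaca-style layout and is an assumption, as are the example messages.

```python
# Sketch: build a prompt in the Alpaca-style format shown above.
def build_prompt(system_message: str, prompt: str) -> str:
    return (
        "### Instruction:\n"
        f"{system_message}\n\n"
        "### Input:\n"
        f"{prompt}\n\n"
        "### Response:\n"
    )

# Placeholder example
print(build_prompt("You are a helpful assistant.",
                   "List three facts about mixture-of-experts models."))
```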
LLM Name: Norobara ZLoss 8x7B AWQ
Repository: 🤗 https://huggingface.co/TheBloke/Norobara-ZLoss-8x7B-AWQ
Model Name: Norobara ZLoss 8X7B
Model Creator: Doctor Shotgun
Base Model(s): Norobara ZLoss 8x7B (Doctor-Shotgun/Norobara-ZLoss-8x7B)
Model Size: 6.5b
Required VRAM: 24.7 GB
Updated: 2025-09-23
Maintainer: TheBloke
Model Type: mixtral
Model Files: 10.0 GB (1-of-3), 10.0 GB (2-of-3), 4.7 GB (3-of-3)
Supported Languages: en
AWQ Quantization: Yes
Quantization Type: awq (see the loading sketch below this listing)
Model Architecture: MixtralForCausalLM
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.37.0.dev0
Tokenizer Class: LlamaTokenizer
Vocabulary Size: 32000
Torch Data Type: float16
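As a minimal sketch, here is one way these AWQ weights could be loaded with Hugging Face Transformers. It assumes a CUDA GPU with enough VRAM for the roughly 24.7 GB of 4-bit weights and the autoawq package installed; the prompt text and generation settings are placeholders.

```python
# Sketch: load the AWQ checkpoint and run one generation.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/Norobara-ZLoss-8x7B-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # place the three safetensors shards across available GPUs
    torch_dtype="auto",  # float16, per the listing
)

prompt = (
    "### Instruction:\nYou are a helpful assistant.\n\n"
    "### Input:\nExplain AWQ quantization in one sentence.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```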

Best Alternatives to Norobara ZLoss 8x7B AWQ

Best Alternatives | Context / RAM | Downloads | Likes
Mixtral 8x7B Instruct V0.1 AWQ | 32K / 24.7 GB | 11814 | 58
Dolphin 2.7 Mixtral 8x7b AWQ | 32K / 24.7 GB | 7363 | 23
...kaLM Mixtral 8x7B V0.2 DPO AWQ | 32K / 24.7 GB | 6 | 0
Mixtral 8x7B Instruct V0.1 AWQ | 32K / 24.7 GB | 6 | 0
Karakuri Lm 8x7b Chat V0.1 AWQ | 32K / 24.7 GB | 5 | 0
Taiwan LLM 8x7B DPO AWQ | 32K / 24.7 GB | 7 | 1
Functionary Medium V2.4 AWQ | 32K / 24.7 GB | 5 | 3
Mixtral Instruct AWQ | 32K / 24.7 GB | 1750 | 43
Synatra Mixtral 8x7B AWQ | 32K / 24.7 GB | 6 | 3
...xtral Instruct AWQ Clone Dec23 | 32K / 24.7 GB | 6 | 0


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124