Zephyr Orpo 141B A35b V0.1 AWQ by MaziyarPanahi


Zephyr Orpo 141B A35b V0.1 AWQ is an open-source language model: an AWQ-quantized release of HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1, maintained by MaziyarPanahi. Key figures from the listing: 19.2B parameters, required VRAM 73.7 GB, 64K context length, mixture-of-experts (MoE) architecture, 4-bit AWQ quantization, LLM Explorer Score 0.13.

Tags: Arxiv:2311.07911, Arxiv:2403.07691, 4-bit, Autotrain compatible, AWQ, Base model:huggingfaceh4/zephy..., Base model:mistral-community/m..., Base model:quantized:huggingfa..., Conversational, Dataset:argilla/distilabel-cap..., Endpoints compatible, Finetuned, Generated from trainer, Has space, Mixtral, MoE, ORPO, Quantized, Region:us, Safetensors, Sharded, Tensorboard, Tensorflow, TRL

Zephyr Orpo 141B A35b V0.1 AWQ Benchmarks

Scores ("nn.n%") indicate how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Model: MaziyarPanahi/zephyr-orpo-141b-A35b-v0.1-AWQ

Zephyr Orpo 141B A35b V0.1 AWQ Parameters and Internals

Model Type: text-generation
Additional Notes: This is an AWQ-quantized version of the base model, intended for efficient inference.
Input/Output:
  Input Format: text prompt as a string
  Accepted Modalities: text
  Output Format: generated text response
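
In practice, the input/output contract above amounts to plain text-in, text-out generation. Below is a minimal sketch of loading this AWQ checkpoint with the Hugging Face transformers library (which can load AWQ checkpoints when the autoawq package is installed); the prompt and generation settings are illustrative assumptions, and roughly the listed 73.7 GB of VRAM is required.

```python
# Minimal sketch: text-in, text-out inference with the AWQ checkpoint.
# Assumes transformers (>= the listed 4.38) with autoawq installed, and
# enough GPU memory (~73.7 GB per the listing). The prompt and generation
# settings are illustrative, not prescribed by the model card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MaziyarPanahi/zephyr-orpo-141b-A35b-v0.1-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the listed torch data type
    device_map="auto",          # spread the 15 weight shards across available GPUs
)

prompt = "Explain mixture-of-experts language models in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```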
LLM Name: Zephyr Orpo 141B A35b V0.1 AWQ
Repository 🤗: https://huggingface.co/MaziyarPanahi/zephyr-orpo-141b-A35b-v0.1-AWQ
Model Name: zephyr-orpo-141b-A35b-v0.1-AWQ
Model Creator: HuggingFaceH4
Base Model(s): HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1
Model Size: 19.2b
Required VRAM: 73.7 GB
Updated: 2025-09-23
Maintainer: MaziyarPanahi
Model Type: mixtral
Model Files: 15 safetensors shards; 5.0 GB each for shards 1-of-15 through 14-of-15, 3.7 GB for shard 15-of-15
AWQ Quantization: Yes
Quantization Type: awq
Model Architecture: MixtralForCausalLM
Context Length: 65536
Model Max Length: 65536
Transformers Version: 4.38.2
Tokenizer Class: LlamaTokenizer
Padding Token: </s>
Vocabulary Size: 32000
Torch Data Type: float16
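
The internals listed above (architecture, context length, tokenizer class, vocabulary size, padding token) can be cross-checked from the hub metadata alone, without pulling the ~73.7 GB of sharded weights. A minimal sketch, assuming only the transformers package is installed:

```python
# Minimal sketch: verify the listed internals from the config and tokenizer
# files alone (a few KB), without downloading the 15 weight shards.
from transformers import AutoConfig, AutoTokenizer

model_id = "MaziyarPanahi/zephyr-orpo-141b-A35b-v0.1-AWQ"

config = AutoConfig.from_pretrained(model_id)
print(config.architectures)            # expected: ['MixtralForCausalLM']
print(config.max_position_embeddings)  # expected: 65536 (the listed context length)
print(config.vocab_size)               # expected: 32000

tokenizer = AutoTokenizer.from_pretrained(model_id)
print(type(tokenizer).__name__)        # expected: LlamaTokenizer (or its fast variant)
print(tokenizer.pad_token)             # expected: </s>, per the listing
```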

Best Alternatives to Zephyr Orpo 141B A35b V0.1 AWQ

Best Alternatives | Context / RAM | Downloads | Likes
Mixtral 8x22B V0.1 AWQ | 64K / 73.7 GB | 7943 | 36
...olphin 2.9.2 Mixtral 8x22b AWQ | 64K / 73.7 GB | 5 | 0
MixTAO 19B Pass | 32K / 38.1 GB | 3 | 2
Lorge 2x7B UAMM | 32K / 38.2 GB | 16 | 0
Multimerge 19B Pass | 32K / 38 GB | 10 | 0
Mistralmath 15B Pass | 32K / 38.5 GB | 11 | 0
TaoPassthrough 15B S | 32K / 38.4 GB | 5 | 0
Raccoon Small | 32K / 38.4 GB | 74 | 1
Raccoon Small Float32 | 32K / 76.7 GB | 23 | 0
Mixtral 11Bx2 MoE 19B | 4K / 38.4 GB | 1179 | 38

Original data from HuggingFace, OpenCompass, and various public Git repositories.