Llama 3 Chatty 2x8B AWQ by solidrust


Tags: 4-bit, AutoTrain compatible, AWQ, Base model (quantized): Undi95/ll..., Base model: Undi95/llama-3-chat..., Conversational, Endpoints compatible, Mixtral, MoE, Quantized, Region: US, Safetensors, Sharded, TensorFlow

Llama 3 Chatty 2x8B AWQ Benchmarks

nn.n% — How the model compares to the reference models: Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o") or GPT-4 ("gpt4").
Llama 3 Chatty 2x8B AWQ (solidrust/Llama-3-Chatty-2x8B-AWQ)

Llama 3 Chatty 2x8B AWQ Parameters and Internals

Model Type: text-generation
Additional Notes: AWQ is an efficient, accurate, and fast low-bit weight quantization method that supports 4-bit quantization. It offers faster Transformers-based inference with quality equivalent to or better than GPTQ. AWQ models are supported on Linux and Windows with NVIDIA GPUs; for macOS, use GGUF models instead.
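As a sketch of how an AWQ checkpoint like this one is typically used (the repo id is taken from this page; the `generate` helper, its parameters, and the reliance on `pip install transformers autoawq` are assumptions, not something this page documents):

```python
# Sketch: using the AWQ checkpoint from this page with Hugging Face Transformers.
# Assumes Linux/Windows with an NVIDIA GPU and `pip install transformers autoawq`.

MODEL_ID = "solidrust/Llama-3-Chatty-2x8B-AWQ"  # repo id from this page


def build_llama3_prompt(user_message: str) -> str:
    """Build a single-turn prompt in the Llama 3 chat format.

    Note the page's "Padding Token: <|begin_of_text|>" entry; that same
    token opens every Llama 3 prompt.
    """
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_message}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )


def generate(user_message: str, max_new_tokens: int = 64) -> str:
    """Hypothetical helper: load the quantized model and run one completion.

    Transformers dispatches to the AWQ kernels automatically when the
    checkpoint's quantization_config says quant_method == "awq".
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    inputs = tokenizer(
        build_llama3_prompt(user_message), return_tensors="pt"
    ).to(model.device)
    out = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(out[0], skip_special_tokens=True)
```

The prompt template above is the standard Llama 3 chat format; calling `generate("Hello!")` would download roughly 8.7 GB of shards on first use.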
LLM Name: Llama 3 Chatty 2x8B AWQ
Repository 🤗: https://huggingface.co/solidrust/Llama-3-Chatty-2x8B-AWQ
Base Model(s): Llama 3 Chatty 2x8B (Undi95/Llama-3-Chatty-2x8B)
Model Size: 13.7B
Required VRAM: 8.7 GB
Updated: 2026-01-10
Maintainer: solidrust
Model Type: mixtral
Model Files: 5.0 GB (1 of 2), 3.7 GB (2 of 2)
AWQ Quantization: Yes
Quantization Type: awq
Model Architecture: MixtralForCausalLM
Context Length: 8192
Model Max Length: 8192
Transformers Version: 4.41.0
Tokenizer Class: PreTrainedTokenizerFast
Padding Token: <|begin_of_text|>
Vocabulary Size: 128256
Torch Data Type: float16
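The listed VRAM requirement is consistent with 4-bit weights. As a rough back-of-envelope check using the numbers from the table above (treating the gap between packed weights and shard size as unquantified overhead such as embeddings and AWQ scales, which is an assumption):

```python
# Sanity-check the table's numbers: 13.7B parameters at 4 bits per weight
# should land near the 8.7 GB of safetensors shards listed above.

params = 13.7e9            # "Model Size: 13.7B" from the table
bits_per_weight = 4        # AWQ 4-bit quantization
weight_gb = params * bits_per_weight / 8 / 1e9  # bytes of packed weights, in GB

shard_gb = 5.0 + 3.7       # the two model files listed above

print(f"packed weights ~= {weight_gb:.2f} GB, shards total {shard_gb:.1f} GB")
```

The packed weights alone come to about 6.85 GB; the remaining ~1.8 GB of shard size plausibly covers embeddings, quantization scales, and tensors kept in float16.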

Best Alternatives to Llama 3 Chatty 2x8B AWQ

Best Alternatives                        Context / RAM       Downloads / Likes
L3.1 MoE 2x8B V0.2                       128K / 27.3 GB      13 / 6
... Deepseek DeepHermes E32 13.7B        128K / 54.9 GB      5 / 0
HAI SER                                  128K / 27.3 GB      19 / 16
L3.1 Celestial Stone 2x8B                128K / 27.3 GB      31 / 23
...ma 3 2x8B Instruct MoE 64K Ctx        64K / 27.3 GB       12 / 4
Defne Llama3 2x8B                        8K / 27.4 GB        207 / 15
Penny Llama3 2x8b                        8K / 27.3 GB        5 / 1
Kilo 2x8B                                8K / 27.5 GB        12 / 1
Llama 3 Elyza Youko MoE 2x8B             8K / 27.3 GB        9 / 0
Llama 3 ELYZA Hermes 2x8B                8K / 27.4 GB        0 / 1
Note: green Score (e.g. "73.2") means that the model is better than solidrust/Llama-3-Chatty-2x8B-AWQ.

Rank the Llama 3 Chatty 2x8B AWQ Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124