ShiningValiant 1.3 AWQ by TheBloke

Tags: 4-bit, 70b, autotrain-compatible, awq, base_model:quantized:ValiantLabs/ShiningValiant, base_model:ValiantLabs/llama2-..., en, llama, llama-2-chat, llama2, quantized, region:us, safetensors, sharded, shining-valiant, tensorflow, valiant, valiant-labs

ShiningValiant 1.3 AWQ Parameters and Internals

Model Type: llama
Use Cases:
  Areas: research, commercial applications
Additional Notes:
  Shining Valiant is released by Valiant Labs and fine-tuned on multiple datasets, with an emphasis on friendly, knowledgeable responses.
Supported Languages: en (high proficiency)
Training Details:
  Methodology: AWQ (efficient, accurate, and fast low-bit weight quantization)
  Context Length: 4096
  Model Architecture: Llama 2 70B architecture, fine-tuned on private datasets
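
As a rough illustration of the AWQ methodology named above, the sketch below shows how a 4-bit AWQ checkpoint like this one is commonly produced with the AutoAWQ library. The local output path and the quant_config values are assumptions for illustration, not the recorded settings for this repository.

# Minimal AWQ quantization sketch using the AutoAWQ library.
# Paths and quant_config values are illustrative assumptions, not the
# exact settings used to produce TheBloke/ShiningValiant-1.3-AWQ.
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

fp16_path = "ValiantLabs/ShiningValiant"  # full-precision base model
quant_path = "ShiningValiant-1.3-AWQ"     # assumed local output directory

# Common AWQ settings: 4-bit weights, group size 128, GEMM kernels.
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

model = AutoAWQForCausalLM.from_pretrained(fp16_path)
tokenizer = AutoTokenizer.from_pretrained(fp16_path)

# Calibrate activation scales on sample data and quantize the weights in place.
model.quantize(tokenizer, quant_config=quant_config)

model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)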
Input Output:
  Input Format:
    [INST] <<SYS>>
    {system_message}
    <</SYS>>
    {prompt} [/INST]
  Accepted Modalities: text
  Output Format: text
  Performance Tips: Use AWQ quantization for faster, lower-memory text generation with Transformers.
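
To make the prompt template concrete, here is a minimal Python sketch that assembles a Llama-2-chat style prompt. The system message and question are placeholder values, and build_prompt is a hypothetical helper, not part of any library.

def build_prompt(system_message: str, prompt: str) -> str:
    # Llama-2-chat template matching the Input Format listed above.
    return f"[INST] <<SYS>>\n{system_message}\n<</SYS>>\n{prompt} [/INST]"

text = build_prompt(
    "You are Shining Valiant, a friendly and knowledgeable assistant.",  # assumed system message
    "Summarize AWQ quantization in one paragraph.",                      # placeholder user prompt
)
print(text)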
Release Notes:
  Version: 1.3
  Notes: Shining Valiant is a chat model built on the Llama 2 architecture, fine-tuned for insight and creativity.
LLM Name: ShiningValiant 1.3 AWQ
Repository: https://huggingface.co/TheBloke/ShiningValiant-1.3-AWQ
Model Name: ShiningValiant 1.3
Model Creator: Valiant Labs
Base Model(s): ShiningValiant (ValiantLabs/ShiningValiant)
Model Size: 70b
Required VRAM: 36.6 GB
Updated: 2025-09-20
Maintainer: TheBloke
Model Type: llama
Model Files: 9.9 GB (1 of 4), 9.9 GB (2 of 4), 9.9 GB (3 of 4), 6.9 GB (4 of 4)
Supported Languages: en
AWQ Quantization: Yes
Quantization Type: awq
Model Architecture: LlamaForCausalLM
License: llama2
Context Length: 4096
Model Max Length: 4096
Transformers Version: 4.35.2
Tokenizer Class: LlamaTokenizer
Padding Token: </s>
Vocabulary Size: 32000
Torch Data Type: float16
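
Given the specifications above (4-bit AWQ weights in four safetensors shards, float16 tensors, a 4096-token context, roughly 36.6 GB of VRAM required), a minimal inference sketch with the Transformers library, which can load AWQ checkpoints natively in the 4.35.x releases listed above, might look like the following; the prompt and generation parameters are illustrative only.

# Minimal inference sketch; generation settings are illustrative defaults.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/ShiningValiant-1.3-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# The quantization config stored in the repo tells Transformers to load the
# 4-bit AWQ weights; device_map="auto" spreads the four shards across GPUs.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "[INST] <<SYS>>\nYou are a helpful assistant.\n<</SYS>>\nWhat is AWQ quantization? [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Illustrative sampling settings; stay within the 4096-token context window.
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))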

Best Alternatives to ShiningValiant 1.3 AWQ

Best Alternatives                    Context / RAM      Downloads   Likes
...0B Instruct Gradient 1048K AWQ    1024K / 39.9 GB    12          1
...70B Instruct Gradient 262K AWQ    256K / 39.9 GB     6           0
Llama 3.3 70B Instruct AWQ           128K / 39.9 GB     364990      6
Llama 3.3 70B Instruct AWQ           128K / 39.9 GB     1593173     5
...lama 3.3 70B Instruct AWQ INT4    128K / 39.9 GB     314282      5
... SauerkrautLM 70B Instruct AWQ    128K / 39.9 GB     79          5
MultiVerse 70B AWQ                   32K / 41.3 GB      7           2
Opus V1.2 70B AWQ                    32K / 36.7 GB      8           1
QuartetAnemoi 70B T0.0001 AWQ        31K / 36.7 GB      6           1
Senku 70B AWQ 4bit GEMM              31K / 36.7 GB      6           1

Original data from Hugging Face, OpenCompass, and various public Git repos.
Release v20241124