ShiningValiant 1.2 GGUF by TheBloke


Tags: 70b · base model: quantized: valiantla... · base model: valiantlabs/llama2-... · en · gguf · llama · llama-2-chat · llama2 · quantized · region:us · shining-valiant · valiant · valiant-labs

ShiningValiant 1.2 GGUF Benchmarks

ShiningValiant 1.2 GGUF (TheBloke/ShiningValiant-1.2-GGUF)

ShiningValiant 1.2 GGUF Parameters and Internals

Model Type: text-generation

Use Cases
Areas: research, commercial applications
Primary Use Cases: insight, creativity, passion, and friendliness

Additional Notes: Shining Valiant is built on top of Stellar Bright and is friendly, enthusiastic, insightful, knowledgeable, and loves to learn.

Supported Languages: en (high)

Training Details
Data Sources: public open-source data; private datasets focused on knowledge, enthusiasm, and structured reasoning
Methodology: fine-tuning over multiple runs across private and public data
Model Architecture: based on Llama 2's 70B-parameter architecture
Input Output
Input Format:
[INST] <<SYS>>
You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
<</SYS>>

{prompt} [/INST]
Accepted Modalities: text
Output Format: textual response
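The input format above is the standard Llama 2 chat template. As a minimal sketch, a single-turn prompt in this format can be assembled like so (the helper name and the shortened default system message are illustrative, not part of the model card):

```python
# Build a single-turn Llama 2 chat prompt in the [INST]/<<SYS>> format shown above.
# The default system message below is abbreviated for readability.
DEFAULT_SYSTEM = (
    "You are a helpful, respectful and honest assistant. "
    "If you don't know the answer to a question, please don't share false information."
)

def build_llama2_prompt(user_message: str, system: str = DEFAULT_SYSTEM) -> str:
    """Wrap a user message in the Llama 2 chat template."""
    return f"[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user_message} [/INST]"

prompt = build_llama2_prompt("Summarize what the GGUF file format is used for.")
```

The resulting string is passed verbatim to a GGUF runtime such as llama.cpp; multi-turn chat repeats the [INST] ... [/INST] pair for each turn.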
Release Notes
Version: 1.2
Notes: Fine-tuned over multiple runs across private and public data, with a focus on insight, creativity, passion, and friendliness.
LLM Name: ShiningValiant 1.2 GGUF
Repository 🤗: https://huggingface.co/TheBloke/ShiningValiant-1.2-GGUF
Model Name: ShiningValiant 1.2
Model Creator: Valiant Labs
Base Model(s): ShiningValiant (ValiantLabs/ShiningValiant)
Model Size: 70b
Required VRAM: 29.3 GB
Updated: 2025-09-20
Maintainer: TheBloke
Model Type: llama
Model Files: 29.3 GB, 36.1 GB, 33.2 GB, 29.9 GB, 38.9 GB, 41.4 GB, 39.1 GB, 47.5 GB, 48.8 GB, 47.5 GB
Supported Languages: en
GGUF Quantization: Yes
Quantization Type: gguf
Model Architecture: AutoModel
License: llama2
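The ten "Model Files" sizes above are the different GGUF quantization levels of the same 70B model. A rough rule of thumb: file size ≈ parameter count × effective bits per weight ÷ 8. A small sketch under that assumption (the ~69B parameter count and the 4.8 bits-per-weight figure are illustrative approximations, not values from the card):

```python
def estimate_gguf_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Rough GGUF file size: parameters times effective bits per weight, in GB."""
    return n_params * bits_per_weight / 8 / 1e9

# Llama 2 "70b" has roughly 69 billion parameters; at ~4.8 effective bits
# per weight (a mid-range K-quant ballpark) this lands near the 41.4 GB
# file in the list above.
mid_quant_gb = estimate_gguf_size_gb(69e9, 4.8)
```

Effective bits per weight run slightly higher than a quant's nominal bit width, since some tensors (e.g. output weights) are kept at higher precision.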

Best Alternatives to ShiningValiant 1.2 GGUF

Best Alternatives | Context / RAM | Downloads | Likes
...us Qwen3 R1 Llama Distill GGUF | 0K / 0.8 GB | 154 | 12
KafkaLM 70B German V0.1 GGUF | 0K / 25.5 GB | 3387 | 49
CodeLlama 70B Instruct GGUF | 0K / 25.5 GB | 2730 | 60
...gekit Passthrough Yqhuxcv GGUF | 0K / 16.9 GB | 10 | 0
CodeLlama 70B Python GGUF | 0K / 25.5 GB | 1726 | 44
Meta Llama 3 70B Instruct GGUF | 0K / 26.4 GB | 23 | 44
DAD Model V2 70B Q4 | 0K / 42.5 GB | 10 | 0
CodeLlama 70B Hf GGUF | 0K / 25.5 GB | 658 | 42
Llama 2 70B Guanaco QLoRA GGUF | 0K / 29.3 GB | 22 | 0
WizardMath 70B V1.0 GGUF | 0K / 29.3 GB | 1230 | 57



Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124