StableBeluga2 70B GGUF by TheBloke


Tags: Arxiv:2306.02707 · Arxiv:2307.09288 · Base model (quantized): stabilityai/StableBeluga2 · Datasets: conceptofmind/cot_submix_original, conceptofmind/flan2021_submix_original, conceptofmind/niv2_submix_original, conceptofmind/t0_submix_original · En · Gguf · Llama · Quantized · Region:us

StableBeluga2 70B GGUF Benchmarks

nn.n% — the model's score relative to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").

StableBeluga2 70B GGUF Parameters and Internals

Model Type 
llama
Use Cases 
Areas:
text generation, research
Limitations:
Beluga's potential outputs cannot be predicted in advance; the model may produce biased or objectionable responses.
Considerations:
Developers should perform safety testing before deploying.
Additional Notes 
Stable Beluga 2 is a Llama 2 70B model fine-tuned on an Orca-style dataset.
Supported Languages 
English (Native)
Training Details 
Data Sources:
conceptofmind/cot_submix_original, conceptofmind/flan2021_submix_original, conceptofmind/t0_submix_original, conceptofmind/niv2_submix_original
Methodology:
Fine-tuning on an Orca-style dataset
Responsible AI Considerations 
Fairness:
Beluga's potential outputs cannot be predicted in advance, and the model may produce inaccurate, biased or objectionable responses.
Mitigation Strategies:
Safety testing and tuning tailored to specific applications is recommended.
Input Output 
Input Format:
### System:
{system_message}

### User:
{prompt}

### Assistant:
Accepted Modalities:
text
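The prompt template above can be assembled with a small helper. A minimal sketch: the exact newline placement is an assumption based on the format string shown, and the default system message is a placeholder, not taken from the repository's README.

```python
def build_prompt(user_prompt: str,
                 system_message: str = "You are a helpful assistant.") -> str:
    """Assemble the '### System / ### User / ### Assistant' template above.

    The default system_message is a placeholder assumption; substitute your own.
    """
    return (
        f"### System:\n{system_message}\n\n"
        f"### User:\n{user_prompt}\n\n"
        "### Assistant:\n"
    )

print(build_prompt("What is Stable Beluga 2?"))
```

The template ends with "### Assistant:" followed by a newline so the model continues generation as the assistant turn.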
LLM Name: StableBeluga2 70B GGUF
Repository 🤗: https://huggingface.co/TheBloke/StableBeluga2-70B-GGUF
Model Name: StableBeluga2
Model Creator: Stability AI
Base Model(s): stabilityai/StableBeluga2
Model Size: 70B
Required VRAM: 29.3 GB
Updated: 2025-08-20
Maintainer: TheBloke
Model Type: llama
Model Files: 29.3 GB, 36.1 GB, 33.2 GB, 29.9 GB, 38.9 GB, 41.4 GB, 39.1 GB, 47.5 GB, 48.8 GB, 47.5 GB
Supported Languages: en
GGUF Quantization: Yes
Quantization Type: gguf
Model Architecture: AutoModel
License: llama2
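The quantized files range from 29.3 GB to 48.8 GB, so the right download depends on available RAM/VRAM. A minimal sketch, assuming llama-cpp-python as the runtime and a small headroom allowance for the KV cache, for picking the largest file that fits a memory budget:

```python
# File sizes (GB) taken from the model-files listing above.
FILE_SIZES_GB = [29.3, 36.1, 33.2, 29.9, 38.9, 41.4, 39.1, 47.5, 48.8, 47.5]

def largest_fitting(sizes_gb, budget_gb, headroom_gb=2.0):
    """Return the largest file size that fits within budget_gb, leaving
    headroom_gb for the KV cache and runtime overhead (the 2 GB default
    is an assumption, not a measured figure)."""
    fitting = [s for s in sizes_gb if s + headroom_gb <= budget_gb]
    return max(fitting) if fitting else None

print(largest_fitting(FILE_SIZES_GB, budget_gb=48.0))

# Loading the chosen file with llama-cpp-python (file name is hypothetical):
# from llama_cpp import Llama
# llm = Llama(model_path="stablebeluga2-70b.Q4_K_M.gguf", n_ctx=4096)
```

With a 48 GB budget this selects the 41.4 GB file, since the 47.5 GB files would leave no headroom for the KV cache.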

Best Alternatives to StableBeluga2 70B GGUF

Best Alternatives | Context / RAM | Downloads | Likes
...gekit Passthrough Yqhuxcv GGUF | 0K / 16.9 GB | 6 | 0
CodeLlama 70B Instruct GGUF | 0K / 25.5 GB | 2684 | 59
CodeLlama 70B Python GGUF | 0K / 25.5 GB | 1409 | 43
KafkaLM 70B German V0.1 GGUF | 0K / 25.5 GB | 1319 | 45
Meta Llama 3 70B Instruct GGUF | 0K / 26.4 GB | 90 | 4
DAD Model V2 70B Q4 | 0K / 42.5 GB | 6 | 0
CodeLlama 70B Hf GGUF | 0K / 25.5 GB | 576 | 42
Llama 2 70B Guanaco QLoRA GGUF | 0K / 29.3 GB | 12 | 0
Swallow 70B Instruct GGUF | 0K / 29.4 GB | 54 | 59
Euryale 1.3 L2 70B GGUF | 0K / 29.3 GB | 2086 | 17
Note: a green score (e.g. "73.2") means the model outperforms TheBloke/StableBeluga2-70B-GGUF.



Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124