Rhea 72B V0.5 4bit 128g by titan087


Tags: 4-bit · 4bit · autotrain compatible · en · endpoints compatible · gptq · llama · quantized · region:us · safetensors

Rhea 72B V0.5 4bit 128g Benchmarks

Benchmark scores (shown as nn.n%) indicate how the model compares with the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").
Rhea 72B V0.5 4bit 128g (titan087/Rhea-72b-v0.5-4bit-128g)

Rhea 72B V0.5 4bit 128g Parameters and Internals

Model Type: fine-tuned, self-supervised learning
Additional Notes: The Rhea project conducts research on various learning methods to improve LLM performance, including creating datasets using SGD.
Supported Languages: en (proficient)
Training Details:
Methodology: fine-tuning with the nox framework, using the Self-Generated Dataset Creation Method for DPO Learning (SGD); a sketch of the idea follows below.
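
The card only names SGD in passing, so the following Python sketch is one plausible reading of the method, not code from the nox framework: the model answers prompts from an existing dataset, and any generation that differs from the reference answer is kept as the "rejected" side of a DPO preference pair. The function name and record fields are illustrative.

```python
# Hedged sketch of the SGD idea as described above: the model answers prompts
# from an existing dataset, and generations that diverge from the reference
# answer become "rejected" completions for DPO preference pairs.
# `build_dpo_pairs` and the record fields are illustrative, not nox APIs.

def build_dpo_pairs(model, tokenizer, dataset, max_new_tokens=256):
    """dataset: iterable of {"prompt": str, "answer": str} records."""
    pairs = []
    for record in dataset:
        inputs = tokenizer(record["prompt"], return_tensors="pt").to(model.device)
        output = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=True)
        # Decode only the newly generated tokens, not the echoed prompt.
        generated = tokenizer.decode(
            output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
        )
        if generated.strip() != record["answer"].strip():
            pairs.append({
                "prompt": record["prompt"],
                "chosen": record["answer"],   # dataset reference answer
                "rejected": generated,        # model's own divergent output
            })
    return pairs
```

Records in this shape can then be fed to any standard DPO trainer.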
LLM Name: Rhea 72B V0.5 4bit 128g
Repository: https://huggingface.co/titan087/Rhea-72b-v0.5-4bit-128g
Model Size: 72B
Required VRAM: 41.3 GB
Updated: 2025-09-23
Maintainer: titan087
Model Type: llama
Model Files: 41.3 GB
Supported Languages: en
GPTQ Quantization: Yes
Quantization Type: gptq|4bit
Model Architecture: LlamaForCausalLM
License: apache-2.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.39.0.dev0
Tokenizer Class: GPT2Tokenizer
Vocabulary Size: 152064
Torch Data Type: float16
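
The 41.3 GB VRAM requirement is roughly what 72B parameters at 4 bits imply: about 36 GB of packed weights, plus scales for each 128-weight quantization group and runtime overhead. A minimal loading sketch with Hugging Face transformers follows; it is an assumption rather than an official recipe, and presumes a transformers build with GPTQ support (via optimum plus auto-gptq or gptqmodel), accelerate for device mapping, and enough GPU memory.

```python
# Minimal loading sketch (an assumption, not an official recipe): requires a
# transformers version with GPTQ support plus the optimum and auto-gptq (or
# gptqmodel) packages, accelerate for device_map, and ~41.3 GB of GPU memory.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "titan087/Rhea-72b-v0.5-4bit-128g"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # shard layers across available GPUs
    torch_dtype="auto",  # checkpoint metadata says float16
)

prompt = "Explain in one sentence what 4-bit GPTQ quantization does."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Note that the 32768-token context adds KV-cache memory on top of the 41.3 GB of weights for long prompts.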

Best Alternatives to Rhea 72B V0.5 4bit 128g

Best Alternatives                 | Context / RAM    | Downloads / Likes
Smaug 72B V0.1 GPTQ               | 32K / 41.3 GB    | 72
Smaug 72B V0.1 GPTQ               | 32K / 41.3 GB    | 58
MoMo 72B Lora 1.8.7 DPO GPTQ      | 32K / 41.3 GB    | 77
Smaug 72B V0.1 2.4bpw H6 EXL2     | 32K / 24.5 GB    | 61
2 Pro Math                        | 128K / 141.9 GB  | 90
Smaug 72B V0.1                    | 32K / 144.5 GB   | 9165468
TW3 JRGL V2                       | 32K / 79.7 GB    | 177500
Le Triomphant ECE TW3             | 32K / 79.7 GB    | 177744
ECE TW3 JRGL V5                   | 32K / 159.6 GB   | 97421
MoMo 72B Lora 1.8.7 DPO           | 32K / 208.5 GB   | 235468
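
The RAM figures in the table track a simple relationship between parameter count and bits per weight. A quick back-of-the-envelope check (my own arithmetic, not from the source):

```python
# Rough rule of thumb (an assumption, not from the source): a model file holds
# at least parameter_count * bits_per_weight / 8 bytes of weights. The files
# above exceed this floor because embeddings, quantization scales, and some
# layers are stored at higher precision.
def weight_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Lower-bound weight storage in gigabytes (1 GB = 1e9 bytes)."""
    return n_params * bits_per_weight / 8 / 1e9

for label, bpw in [("GPTQ 4-bit", 4.0), ("EXL2 2.4bpw", 2.4), ("float16", 16.0)]:
    print(f"{label:>12}: ~{weight_size_gb(72e9, bpw):.1f} GB")
# ~36.0, ~21.6, ~144.0 GB — compare with 41.3, 24.5, and 144.5 GB above.
```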

Rank the Rhea 72B V0.5 4bit 128g Capabilities

🆘 Have you tried this model? Rate its performance. This feedback will help the ML community identify the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124