OpenBezoar HH RLHF DPO by SurgeGlobal


Tags: Arxiv:2305.18290, Arxiv:2306.02707, Arxiv:2404.12195, Autotrain compatible, Base model:finetune:surgegloba..., Base model:surgeglobal/openbez..., Dataset:anthropic/hh-rlhf, En, Endpoints compatible, Llama, Pytorch, Region:us, Safetensors

OpenBezoar HH RLHF DPO Benchmarks

OpenBezoar HH RLHF DPO (SurgeGlobal/OpenBezoar-HH-RLHF-DPO)

OpenBezoar HH RLHF DPO Parameters and Internals

Model Type 
text-generation
Use Cases 
Limitations:
- The model may not consistently show improved instruction-following ability, and it can respond inappropriately or get stuck in loops.
- Although the model is aligned to human preferences and has been evaluated for performance, it is not guaranteed to refrain from generating harmful content.
- Caution is urged against relying on this model for production or adjacent use cases.
Training Details 
Data Sources:
Anthropic's HH-RLHF Dataset
Data Volume:
First 100K examples
Methodology:
Direct Preference Optimization (DPO); see the illustrative training sketch below.
Model Architecture:
OpenLLaMA 3B v2 architecture
Input Output 
Input Format:
Modified version of the Alpaca prompt template
Performance Tips:
Use the Alpaca prompt template to obtain the best responses on instruction-related tasks; a loading and prompting sketch follows the specification list below.
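
The recipe summarized above (DPO over the first 100K preference pairs of Anthropic's HH-RLHF, starting from the OpenBezoar HH RLHF SFT checkpoint) is not published as a training script here. The following is a minimal illustrative sketch using Hugging Face TRL, assuming a recent TRL release; the prompt/response split, beta, batch size, and other hyperparameters are assumptions, not the authors' actual configuration.

```python
# Illustrative only: approximates the DPO step described in Training Details.
# Hyperparameters and preprocessing are assumptions, not the published recipe.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

BASE = "SurgeGlobal/OpenBezoar-HH-RLHF-SFT"  # SFT checkpoint listed as the base model
tokenizer = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForCausalLM.from_pretrained(BASE)

# First 100K preference pairs from Anthropic's HH-RLHF, as stated in the card.
dataset = load_dataset("Anthropic/hh-rlhf", split="train[:100000]")

def to_preference_pairs(example):
    # HH-RLHF stores full dialogues; split off the shared prefix as the prompt.
    # (Assumed preprocessing -- the card does not document the exact scheme.)
    marker = "\n\nAssistant:"
    chosen, rejected = example["chosen"], example["rejected"]
    cut = chosen.rfind(marker) + len(marker)
    return {
        "prompt": chosen[:cut],
        "chosen": chosen[cut:],
        "rejected": rejected[rejected.rfind(marker) + len(marker):],
    }

dataset = dataset.map(to_preference_pairs, remove_columns=dataset.column_names)

config = DPOConfig(
    output_dir="openbezoar-hh-rlhf-dpo",
    beta=0.1,                       # assumed DPO temperature
    per_device_train_batch_size=2,  # assumed; adjust to available VRAM
    max_length=2048,                # matches the model's context length
)
trainer = DPOTrainer(model=model, args=config, train_dataset=dataset, processing_class=tokenizer)
trainer.train()
```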
LLM Name: OpenBezoar HH RLHF DPO
Repository: https://huggingface.co/SurgeGlobal/OpenBezoar-HH-RLHF-DPO
Base Model(s): OpenBezoar HH RLHF SFT (SurgeGlobal/OpenBezoar-HH-RLHF-SFT)
Model Size: 3b
Required VRAM: 6.8 GB
Updated: 2025-06-19
Maintainer: SurgeGlobal
Model Type: llama
Model Files: 6.8 GB, 6.8 GB
Supported Languages: en
Model Architecture: LlamaForCausalLM
License: cc-by-nc-4.0
Context Length: 2048
Model Max Length: 2048
Transformers Version: 4.33.2
Tokenizer Class: LlamaTokenizer
Vocabulary Size: 32000
Torch Data Type: float16
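
Given the specification above (LlamaForCausalLM, 2048-token context, float16 weights, LlamaTokenizer), the checkpoint loads with the standard transformers API. The sketch below is illustrative: the Alpaca-style prompt is a generic placeholder rather than the exact "modified Alpaca" template used by the maintainers, so consult the repository for the precise format.

```python
# Minimal inference sketch using the standard transformers API.
# The prompt is a generic Alpaca-style placeholder, not the maintainers' exact template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

REPO = "SurgeGlobal/OpenBezoar-HH-RLHF-DPO"
tokenizer = AutoTokenizer.from_pretrained(REPO)
model = AutoModelForCausalLM.from_pretrained(
    REPO, torch_dtype=torch.float16, device_map="auto"  # ~6.8 GB of fp16 weights
)

prompt = (
    "### Instruction:\n"
    "Explain what Direct Preference Optimization is in one paragraph.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

Sampling settings here are arbitrary defaults; keep the prompt plus generated tokens within the 2048-token context limit.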

Best Alternatives to OpenBezoar HH RLHF DPO

Best Alternatives | Context / RAM | Downloads / Likes
ISA 03 Mini 3B Hybrid Preview | 256K / 6.5 GB | 13593
Llama 3.2 3B Instruct | 128K / 6.5 GB | 15260101528
Llama 3.2 3B | 128K / 6.5 GB | 248986584
Hermes 3 Llama 3.2 3B | 128K / 6.5 GB | 46478161
DeepSeek R1 Distill Llama 3B | 128K / 6.5 GB | 232014
Cogito V1 Preview Llama 3B | 128K / 7.2 GB | 294195
Orpheus 3B 0.1 Ft | 128K / 6.6 GB | 278894
Calme 3.1 Llamaloi 3B | 128K / 10.6 GB | 43171
Llama 3.2 3B Bespoke Thought | 128K / 6.4 GB | 16053
Llama 3.2 3B Instruct | 128K / 6.5 GB | 18863568


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124