OpenBezoar HH RLHF DPO by SurgeGlobal


arXiv: 2305.18290 | 2306.02707 | 2404.12195 | Dataset: anthropic/hh-rlhf

OpenBezoar HH RLHF DPO Benchmarks


OpenBezoar HH RLHF DPO Parameters and Internals

Model Type
text-generation
Use Cases
Limitations:
- The model may not consistently show improved instruction-following ability, and it can respond inappropriately or get stuck in loops.
- Although the model is aligned to human preferences and has been evaluated for performance, it is not guaranteed to refrain from generating harmful content.
- Caution is urged against relying on this model for production or production-adjacent use cases.
Training Details
Data Sources:
Anthropic's HH-RLHF dataset
Data Volume:
First 100K examples
Methodology:
Direct Preference Optimization (DPO)
Model Architecture:
OpenLLaMA 3B v2 architecture
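The DPO objective used here can be sketched in a few lines. This is a generic illustration of the DPO loss from arXiv:2305.18290, not the authors' training code; the function name and scalar log-probability arguments are assumptions for clarity.

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Illustrative DPO loss for one preference pair.

    Each argument is the summed log-probability of the chosen or
    rejected response under the trained policy or the frozen
    reference model (here the SFT checkpoint).
    """
    # Reward margin: policy log-ratio minus reference log-ratio, scaled by beta
    margin = beta * ((policy_chosen_logp - ref_chosen_logp)
                     - (policy_rejected_logp - ref_rejected_logp))
    # -log(sigmoid(margin)) = log(1 + exp(-margin)), computed stably
    return math.log1p(math.exp(-margin))
```

Widening the margin between chosen and rejected responses (relative to the reference model) drives the loss toward zero, which is what pushes the policy toward human-preferred outputs without an explicit reward model.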
Input Output
Input Format:
Modified version of the Alpaca prompt template
Performance Tips:
Use the Alpaca prompt template to obtain the best responses for instruction-related tasks.
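The card says a modified version of the Alpaca template is used but does not reproduce it. The sketch below shows the standard Alpaca layout as a starting point; the `alpaca_prompt` helper name is an assumption, and the exact modification should be checked against the official OpenBezoar repository.

```python
def alpaca_prompt(instruction, input_text=None):
    """Standard Alpaca prompt layout. OpenBezoar reportedly uses a
    *modified* version, so verify the exact template upstream."""
    if input_text:
        return (
            "Below is an instruction that describes a task, paired with an "
            "input that provides further context. Write a response that "
            "appropriately completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            "### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. Write a response "
        "that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )
```

The model's completion is then generated after the `### Response:` marker.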
LLM Name: OpenBezoar HH RLHF DPO
Repository 🤗: https://huggingface.co/SurgeGlobal/OpenBezoar-HH-RLHF-DPO
Base Model(s): OpenBezoar HH RLHF SFT (SurgeGlobal/OpenBezoar-HH-RLHF-SFT)
Model Size: 3B
Required VRAM: 6.8 GB
Updated: 2025-06-09
Maintainer: SurgeGlobal
Model Type: llama
Model Files: 6.8 GB
Supported Languages: en
Model Architecture: LlamaForCausalLM
License: cc-by-nc-4.0
Context Length: 2048
Model Max Length: 2048
Transformers Version: 4.33.2
Tokenizer Class: LlamaTokenizer
Vocabulary Size: 32000
Torch Data Type: float16
OpenBezoar HH RLHF DPO (SurgeGlobal/OpenBezoar-HH-RLHF-DPO)

Best Alternatives to OpenBezoar HH RLHF DPO

Best Alternatives                 Context / RAM    Downloads  Likes
ISA 03 Mini 3B Hybrid Preview     256K / 6.5 GB    491        3
Llama 3.2 3B Instruct             128K / 6.5 GB    1518002    1505
Llama 3.2 3B                      128K / 6.5 GB    285902     574
Hermes 3 Llama 3.2 3B             128K / 6.5 GB    43292      159
DeepSeek R1 Distill Llama 3B      128K / 6.5 GB    947        13
Cogito V1 Preview Llama 3B        128K / 7.2 GB    2683       95
Orpheus 3B 0.1 Ft                 128K / 6.6 GB    21488      3
Calme 3.1 Llamaloi 3B             128K / 10.6 GB   3551       1
Orpheus 3B 0.1 Pretrained         128K / 6.6 GB    1045       10
Llama 3.2 3B Instruct             128K / 6.5 GB    215054     66
Note: a green score (e.g. "73.2") indicates that the model outperforms SurgeGlobal/OpenBezoar-HH-RLHF-DPO.


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124