ArliAI Llama 3 8B Instruct ORPO V0.1 by OwenArli


Tags: Autotrain compatible · Conversational · Endpoints compatible · Instruct · Llama · Region:us · Safetensors · Sharded · Tensorflow

ArliAI Llama 3 8B Instruct ORPO V0.1 Benchmarks

Benchmark scores (nn.n%) show how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").

ArliAI Llama 3 8B Instruct ORPO V0.1 Parameters and Internals

Additional Notes 
Despite the toxic datasets included to reduce refusals, this model remains relatively safe; it simply refuses less often than Meta's original Instruct model. As of now, ORPO fine-tuning appears to improve some metrics while degrading others.
Training Details 
Data Sources:
https://huggingface.co/datasets/Intel/orca_dpo_pairs
https://huggingface.co/datasets/argilla/distilabel-math-preference-dpo
https://huggingface.co/datasets/unalignment/toxic-dpo-v0.2
https://huggingface.co/datasets/M4-ai/prm_dpo_pairs_cleaned
https://huggingface.co/datasets/jondurbin/truthy-dpo-v0.1
Methodology:
ORPO fine-tuning
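
For illustration, a minimal sketch of what ORPO preference tuning looks like with Hugging Face's trl library, using one of the data sources listed above. The hyperparameters, column mapping, and trl version are assumptions for the sketch, not the settings used to train this model:

# Minimal ORPO fine-tuning sketch (trl ~0.8). Hyperparameters are
# illustrative assumptions, not the actual training configuration.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base = "meta-llama/Meta-Llama-3-8B-Instruct"
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # Llama 3 ships without a pad token

# One of the preference datasets listed above; ORPOTrainer expects
# "prompt", "chosen", and "rejected" columns.
ds = load_dataset("Intel/orca_dpo_pairs", split="train")
ds = ds.rename_column("question", "prompt").remove_columns(["system"])

args = ORPOConfig(
    output_dir="llama3-8b-orpo",
    beta=0.1,                       # weight of the odds-ratio penalty term
    max_length=2048,
    max_prompt_length=1024,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,
    learning_rate=5e-6,
    num_train_epochs=1,
    bf16=True,
)
trainer = ORPOTrainer(model=model, args=args, train_dataset=ds, tokenizer=tokenizer)
trainer.train()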
Input Output 
Input Format:
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{{ system_prompt }}<|eot_id|><|start_header_id|>user<|end_header_id|>

{{ user_message_1 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{{ model_answer_1 }}<|eot_id|><|start_header_id|>user<|end_header_id|>

{{ user_message_2 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
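
The tokenizer's built-in chat template produces this same format, so the special tokens need not be concatenated by hand. A short sketch (the message contents are placeholders):

# Sketch: build the prompt format above via the tokenizer's chat template.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("OwenArli/ArliAI-Llama-3-8B-Instruct-ORPO-v0.1")
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain ORPO in one sentence."},
]
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)  # ends with the assistant header, ready for generation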
LLM Name: ArliAI Llama 3 8B Instruct ORPO V0.1
Repository 🤗: https://huggingface.co/OwenArli/ArliAI-Llama-3-8B-Instruct-ORPO-v0.1
Model Size: 8b
Required VRAM: 16.1 GB
Updated: 2025-06-09
Maintainer: OwenArli
Model Type: llama
Instruction-Based: Yes
Model Files: 2.0 GB (1-of-9), 1.9 GB (2-of-9), 2.0 GB (3-of-9), 1.9 GB (4-of-9), 2.0 GB (5-of-9), 1.9 GB (6-of-9), 2.0 GB (7-of-9), 1.3 GB (8-of-9), 1.1 GB (9-of-9)
Model Architecture: LlamaForCausalLM
License: llama3
Context Length: 8192
Model Max Length: 8192
Transformers Version: 4.40.1
Tokenizer Class: PreTrainedTokenizerFast
Padding Token: <|eot_id|>
Vocabulary Size: 128256
Torch Data Type: bfloat16
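A minimal loading sketch matching the specs above (sharded safetensors, bfloat16, roughly 16.1 GB of VRAM, 8192-token context); device_map="auto" assumes the accelerate package is installed:

# Sketch: load and query the model per the listed specs.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "OwenArli/ArliAI-Llama-3-8B-Instruct-ORPO-v0.1"
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tok.apply_chat_template(
    [{"role": "user", "content": "Hello!"}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)
out = model.generate(inputs, max_new_tokens=256)
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))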

Best Alternatives to ArliAI Llama 3 8B Instruct ORPO V0.1

Model | Context / RAM | Downloads | Likes
...otron 8B UltraLong 4M Instruct | 4192K / 32.1 GB | 3284 | 108
UltraLong Thinking | 4192K / 16.1 GB | 367 | 2
...a 3.1 8B UltraLong 4M Instruct | 4192K / 32.1 GB | 176 | 24
...a 3.1 8B UltraLong 2M Instruct | 2096K / 32.1 GB | 875 | 9
...otron 8B UltraLong 2M Instruct | 2096K / 32.1 GB | 526 | 15
Zero Llama 3.1 8B Beta6 | 1048K / 16.1 GB | 958 | 1
...otron 8B UltraLong 1M Instruct | 1048K / 32.1 GB | 1808 | 45
...a 3.1 8B UltraLong 1M Instruct | 1048K / 32.1 GB | 1387 | 29
....1 1million Ctx Dark Planet 8B | 1048K / 32.3 GB | 90 | 2
...dger Nu Llama 3.1 8B UltraLong | 1048K / 16.2 GB | 30 | 3

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124