LLM-jp 13B DPO LoRA HH-RLHF-ja v1.1 by llm-jp


Tags: arXiv:2305.18290 · Dataset: llm-jp/hh-rlhf-12k-ja · en · ja · LoRA · region:us · safetensors


LLM-jp 13B DPO LoRA HH-RLHF-ja v1.1 Parameters and Internals

Model Type: Transformer-based Language Model, text generation

Use Cases:
  Areas: research, commercial applications
  Applications: Natural Language Processing
  Primary Use Cases: text generation
  Limitations: outputs are not tuned to ensure alignment with human intent and safety considerations

Additional Notes: The released models are in the early stages of research and development.

Supported Languages: en (English), ja (Japanese)

Training Details:
  Data Sources: Wikipedia, mC4, The Pile, The Stack (base-model pre-training)
  Data Volume: 300B tokens (pre-training)
  Methodology: Direct Preference Optimization (DPO, arXiv:2305.18290) on llm-jp/hh-rlhf-12k-ja preference pairs; see the loss sketch after this section
  Context Length: 2048
  Hardware Used: 96 A100 40GB GPUs for pre-training; 8 A100 40GB GPUs for instruction tuning
  Model Architecture: Transformer, implemented with Hugging Face Transformers

Input/Output:
  Accepted Modalities: text
  Output Format: text
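
Direct Preference Optimization (arXiv:2305.18290, tagged above) fine-tunes the policy directly on preference pairs, here drawn from llm-jp/hh-rlhf-12k-ja, without training a separate reward model. Below is a minimal sketch of the DPO loss; the function name, tensor shapes, and the beta = 0.1 default are illustrative assumptions, not values reported on this card.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """DPO loss (Rafailov et al., 2023). Inputs are per-example summed
    log-probabilities of the chosen/rejected responses under the trained
    policy and the frozen reference model. beta = 0.1 is an assumed default."""
    chosen_logratios = policy_chosen_logps - ref_chosen_logps
    rejected_logratios = policy_rejected_logps - ref_rejected_logps
    # Implicit reward margin between the preferred and dispreferred
    # responses, defined by the policy/reference log-ratios.
    logits = beta * (chosen_logratios - rejected_logratios)
    return -F.logsigmoid(logits).mean()
```

beta acts like a KL-style coefficient: larger values keep the policy closer to the frozen reference model, and the paper typically uses small values such as 0.1.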
LLM Name: LLM-jp 13B DPO LoRA HH-RLHF-ja v1.1
Repository 🤗: https://huggingface.co/llm-jp/llm-jp-13b-dpo-lora-hh_rlhf_ja-v1.1
Model Size: 13b
Required VRAM: 0.8 GB
Updated: 2025-08-16
Maintainer: llm-jp
Model Files: 0.8 GB
Supported Languages: en, ja
Model Architecture: AutoModel
License: apache-2.0
Is Biased: none
Tokenizer Class: PreTrainedTokenizerFast
Padding Token: <pad|LLM-jp>
PEFT Type: LORA
LoRA Model: Yes
PEFT Target Modules: c_proj|c_attn|c_fc
LoRA Alpha: 256
LoRA Dropout: 0.05
R Param: 128
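
The adapter hyperparameters above map one-to-one onto a PEFT LoraConfig. The following is a hypothetical reconstruction for readers who want to train a comparable adapter; the authoritative values ship in the adapter's own adapter_config.json, and task_type is an assumption.

```python
from peft import LoraConfig

# Hypothetical reconstruction of the adapter settings listed above.
lora_config = LoraConfig(
    r=128,                                        # R Param
    lora_alpha=256,                               # LoRA Alpha
    lora_dropout=0.05,                            # LoRA Dropout
    target_modules=["c_proj", "c_attn", "c_fc"],  # PEFT Target Modules
    bias="none",                                  # Is Biased: none
    task_type="CAUSAL_LM",                        # assumption, not stated on the card
)
```

With lora_alpha = 2 * r, the LoRA update is scaled by alpha / r = 2, a common choice; the c_attn / c_proj / c_fc module names indicate a GPT-2-style block layout.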
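
Because this repository contains only the ~0.8 GB LoRA adapter, inference needs the 13B base model with the adapter applied on top. Here is a minimal sketch using transformers and peft; the dtype, device_map, and prompt are assumptions, and the base model id is read from the adapter's config rather than hardcoded.

```python
import torch
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

adapter_id = "llm-jp/llm-jp-13b-dpo-lora-hh_rlhf_ja-v1.1"

# Resolve the base model from the adapter's own config instead of hardcoding it.
base_id = PeftConfig.from_pretrained(adapter_id).base_model_name_or_path

# The card lists a PreTrainedTokenizerFast shipped with the adapter repo.
tokenizer = AutoTokenizer.from_pretrained(adapter_id)

model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"  # assumed settings
)
model = PeftModel.from_pretrained(model, adapter_id)
model.eval()

# "What is the capital of Japan?" -- illustrative prompt; check the model
# card for the exact prompt template the adapter was trained with.
prompt = "日本の首都はどこですか?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```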

Best Alternatives to LLM-jp 13B DPO LoRA HH-RLHF-ja v1.1

Best Alternatives | Context / RAM | Downloads | Likes
Nous Hermes Llama2 Llamafile | 0K / n/a | 259 | 2
BimoGPT Llama2 13B | 0K / 0.6 GB | 0 | 7
Llama2 13B Chinese Chat | 0K / 0 GB | 0 | 39
PhysicsLlama 13B | 0K / 0 GB | 0 | 1
...fast Codellama 13B Instruct Hf | 0K / 13 GB | 1 | 1
...lama 2 13B Alpaca Spanish LoRA | 0K / 1.7 GB | 0 | 2
Medalpaca Lora 13B 8bit | 0K / 0.1 GB | 0 | 1
MythoMax L2 13B GGUF | 0K / 5.4 GB | 124181 | 173
Llama 3 13B Instruct V0.1 GGUF | 0K / 5.1 GB | 1224 | 5
Hermes 2 Pro Llama 3 13B GGUF | 0K / 4.6 GB | 51 | 0


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124