Japanese GPT Neox 3.6B Instruction Ppo by rinna


Japanese GPT Neox 3.6B Instruction Ppo is an open-source language model by rinna. Features: 3.6B parameters, 7.4 GB required VRAM, 2K context length, MIT license, instruction-based. LLM Explorer Score: 0.09.

Tags: arXiv:1707.06347 · arXiv:2203.02155 · arXiv:2404.01657 · AutoTrain compatible · Base model (fine-tune of): rinna/japanese-gpt-neox-3.6b · Dataset: Anthropic/hh-rlhf · GPT-NeoX · Instruct · ja · LM · PyTorch · Safetensors · Region: US

Japanese GPT Neox 3.6B Instruction Ppo Benchmarks

nn.n% — how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").

Japanese GPT Neox 3.6B Instruction Ppo Parameters and Internals

Model Type 
text-generation, lm, nlp
Additional Notes 
The PPO model tends to generate repeated text more often than its SFT counterpart.
Supported Languages 
Japanese (Excellent)
Training Details 
Data Sources:
Anthropic/hh-rlhf
Methodology:
Supervised fine-tuning (SFT) followed by PPO-based reinforcement learning from human feedback (RLHF); see the objective sketch below.
Model Architecture:
36-layer, 2816-hidden-size transformer-based architecture
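
For background on the optimization method cited above (arXiv:1707.06347), PPO maximizes a clipped surrogate objective; the sketch below is the standard formulation from that paper, not a statement of rinna's exact training setup or hyperparameters.

```latex
% Standard PPO clipped surrogate objective (arXiv:1707.06347).
% r_t(\theta): probability ratio between the current and the pre-update policy,
% \hat{A}_t: estimated advantage, \epsilon: clipping range (0.2 in the paper).
r_t(\theta) = \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\mathrm{old}}}(a_t \mid s_t)},
\qquad
L^{\mathrm{CLIP}}(\theta) =
  \hat{\mathbb{E}}_t\!\left[
    \min\!\Big( r_t(\theta)\,\hat{A}_t,\;
    \operatorname{clip}\!\big(r_t(\theta),\,1-\epsilon,\,1+\epsilon\big)\,\hat{A}_t \Big)
  \right]
```

In the RLHF setting (arXiv:2203.02155), the reward that PPO optimizes typically combines a learned preference-model score with a KL penalty that keeps the policy close to the SFT model.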
Input Output 
Input Format:
A special conversational format is expected, ending with 'システム: ' to prompt the model's response (see the prompt-building sketch below).
Accepted Modalities:
text
Performance Tips:
Set `repetition_penalty=1.1` for better generation performance.
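
The expected prompt format can be constructed as in the minimal Python sketch below. The "ユーザー"/"システム" speaker labels and the "<NL>" turn separator are assumptions taken from the conventions on the Hugging Face model card, not from this page; verify them against the repository linked below.

```python
# Minimal sketch: build a conversational prompt for
# rinna/japanese-gpt-neox-3.6b-instruction-ppo.
# Assumed conventions (per the Hugging Face model card): speakers are labeled
# "ユーザー" (user) and "システム" (system), turns are joined with the literal
# "<NL>" separator, and the prompt ends with "システム: ".
conversation = [
    {"speaker": "ユーザー", "text": "日本で一番高い山は何ですか？"},  # "What is the highest mountain in Japan?"
]
turns = [f"{turn['speaker']}: {turn['text']}" for turn in conversation]
prompt = "<NL>".join(turns) + "<NL>" + "システム: "
print(prompt)  # ユーザー: 日本で一番高い山は何ですか？<NL>システム: 
```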
LLM Name: Japanese GPT Neox 3.6B Instruction Ppo
Repository 🤗: https://huggingface.co/rinna/japanese-gpt-neox-3.6b-instruction-ppo
Base Model(s): rinna/japanese-gpt-neox-3.6b
Model Size: 3.6b
Required VRAM: 7.4 GB
Updated: 2025-10-03
Maintainer: rinna
Model Type: gpt_neox
Instruction-Based: Yes
Model Files: 7.4 GB
Supported Languages: ja
Model Architecture: GPTNeoXForCausalLM
License: mit
Context Length: 2048
Model Max Length: 2048
Tokenizer Class: T5Tokenizer
Padding Token: [PAD]
Vocabulary Size: 32000
Torch Data Type: float16
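
A loading-and-generation sketch consistent with the specs above (T5Tokenizer-class tokenizer, float16 weights, 2048-token context) and the repetition_penalty=1.1 tip. The sampling settings and the <NL>-to-newline replacement are illustrative assumptions, not values prescribed by this page.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "rinna/japanese-gpt-neox-3.6b-instruction-ppo"

# use_fast=False keeps the sentencepiece-backed T5Tokenizer listed above.
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the float16 weights (~7.4 GB VRAM)
)
if torch.cuda.is_available():
    model = model.to("cuda")

# Prompt in the conversational format described above, ending with "システム: ".
prompt = "ユーザー: 日本で一番高い山は何ですか？<NL>システム: "
inputs = tokenizer(prompt, add_special_tokens=False, return_tensors="pt").to(model.device)

with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=128,       # keep prompt + output within the 2048-token context
        do_sample=True,
        temperature=0.7,          # illustrative sampling choice
        repetition_penalty=1.1,   # recommended above to reduce repeated text
        pad_token_id=tokenizer.pad_token_id,
    )

# Decode only the newly generated tokens.
response = tokenizer.decode(
    output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(response.replace("<NL>", "\n"))  # assumption: the model emits <NL> for line breaks
```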

Best Alternatives to Japanese GPT Neox 3.6B Instruction Ppo

Best Alternatives | Context / RAM | Downloads | Likes
...rrowSmartPlus 3.6B Instruction | 2K / 14.3 GB | 1 | 1
...rtPlus 3.6B Instant Sft JHSVer | 2K / 14.3 GB | 2 | 1
... GPT Neox 3.6B Instruction Sft | 2K / 7.4 GB | 9602 | 105
... Large Lm 3.6B Instruction Sft | 2K / 7.2 GB | 550 | 27
...T Neox 3.6B Instruction Sft V2 | 2K / 7.4 GB | 805 | 26
...n Sft 4bit 128g Actorder False | 2K / 2.1 GB | 4 | 2
...tion Sft 8bit 1g Actorder True | 2K / 2.8 GB | 4 | 2
Note: a green score (e.g., "73.2") means the alternative is better than rinna/japanese-gpt-neox-3.6b-instruction-ppo.

Original data from HuggingFace, OpenCompass and various public git repos.