Japanese GPT Neox 3.6B Instruction Sft V2 by rinna


Japanese GPT Neox 3.6B Instruction Sft V2 is an open-source language model by rinna. Features: 3.6B parameters, VRAM: 7.4 GB, Context: 2K, License: MIT, Instruction-Based, LLM Explorer Score: 0.09.

Tags: Arxiv:2404.01657, Autotrain compatible, Base model:finetune:rinna/japa..., Base model:rinna/japanese-gpt-..., Dataset:Anthropic/hh-rlhf, Dataset:stanfordnlp/SHP, GPT NeoX, Instruct, ja, LM, PyTorch, Region:us, Safetensors

Japanese GPT Neox 3.6B Instruction Sft V2 Benchmarks

Benchmark scores (shown as percentages) compare the model against reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").
Japanese GPT Neox 3.6B Instruction Sft V2 (rinna/japanese-gpt-neox-3.6b-instruction-sft-v2)

Japanese GPT Neox 3.6B Instruction Sft V2 Parameters and Internals

Model Type: text generation
Supported Languages: Japanese (Advanced)
Training Details
Data Sources: Anthropic/hh-rlhf, stanfordnlp/SHP
Methodology: fine-tuning on translated subsets of the datasets
Model Architecture: a 36-layer, 2816-hidden-size transformer-based language model
Input Output
Input Format: a special format that constructs the input as a conversation between 'ユーアー' (user) and 'システム' (system)
LLM Name: Japanese GPT Neox 3.6B Instruction Sft V2
Repository: https://huggingface.co/rinna/japanese-gpt-neox-3.6b-instruction-sft-v2
Base Model(s): rinna/japanese-gpt-neox-3.6b
Model Size: 3.6b
Required VRAM: 7.4 GB
Updated: 2025-11-15
Maintainer: rinna
Model Type: gpt_neox
Instruction-Based: Yes
Model Files: 7.4 GB
Supported Languages: ja
Model Architecture: GPTNeoXForCausalLM
License: mit
Context Length: 2048
Model Max Length: 2048
Tokenizer Class: T5Tokenizer
Padding Token: [PAD]
Vocabulary Size: 32000
Torch Data Type: float16
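The listed VRAM figure can be sanity-checked from the spec sheet. The sketch below uses a standard rough parameter count for a GPT-NeoX-style transformer (about 12·h² weights per block plus embeddings); the formula and the assumption of untied input/output embeddings are approximations, not values from this page.

```python
# Rough parameter-count estimate for a GPT-NeoX-style model using the
# spec-sheet numbers: 36 layers, hidden size 2816, vocabulary 32000.
def param_estimate(layers=36, hidden=2816, vocab=32000):
    per_layer = 12 * hidden * hidden   # attention (~4h^2) + MLP (~8h^2), biases ignored
    embeddings = 2 * vocab * hidden    # input embedding + LM head (assumed untied)
    return layers * per_layer + embeddings

def fp16_gib(params):
    return params * 2 / 1024**3        # float16 = 2 bytes per parameter

params = param_estimate()              # roughly 3.6 billion parameters
size = fp16_gib(params)                # roughly 6.7 GiB (~7.2 GB)
```

The estimate lands near the 7.4 GB file size and VRAM figure listed above, which is consistent with the float16 Torch data type.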

Best Alternatives to Japanese GPT Neox 3.6B Instruction Sft V2

Best Alternatives                      Context / RAM   Downloads   Likes
...rrowSmartPlus 3.6B Instruction      2K / 14.3 GB    1           1
...rtPlus 3.6B Instant Sft JHSVer      2K / 14.3 GB    2           1
... GPT Neox 3.6B Instruction Sft      2K / 7.4 GB     9602        105
... Large Lm 3.6B Instruction Sft      2K / 7.2 GB     550         27
... GPT Neox 3.6B Instruction Ppo      2K / 7.4 GB     1685        73
...n Sft 4bit 128g Actorder False      2K / 2.1 GB     4           2
...tion Sft 8bit 1g Actorder True      2K / 2.8 GB     4           2
Note: a green score (e.g. "73.2") means that the model is better than rinna/japanese-gpt-neox-3.6b-instruction-sft-v2.

Rank the Japanese GPT Neox 3.6B Instruction Sft V2 Capabilities

Have you tried this model? Rate its performance. Your feedback helps the ML community identify the most suitable model for their needs. Your contribution really does make a difference!

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  

What open-source LLMs or SLMs are you in search of? 52721 in total.

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20260328a