| Attribute | Value |
|---|---|
| LLM Name | Japanese Large Lm 3.6B Instruction Sft 8bit 1g Actorder True |
| Repository 🤗 | https://huggingface.co/line-corporation/japanese-large-lm-3.6b-instruction-sft-8bit-1g-actorder_True |
| Model Size | 3.6b |
| Required VRAM | 2.8 GB |
| Updated | 2025-09-23 |
| Maintainer | line-corporation |
| Model Type | gpt_neox |
| Instruction-Based | Yes |
| Model Files | |
| Supported Languages | ja |
| Quantization Type | 8bit |
| Model Architecture | GPTNeoXForCausalLM |
| License | apache-2.0 |
| Context Length | 2048 |
| Model Max Length | 2048 |
| Transformers Version | 4.33.0 |
| Vocabulary Size | 51200 |
| Torch Data Type | float16 |
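The repository name and the 8bit quantization type suggest GPTQ-format weights, which Transformers 4.33 can load directly when `optimum` and `auto-gptq` are installed. The snippet below is a minimal, unverified loading sketch; the `use_fast=False` tokenizer setting and the prompt template are assumptions carried over from the upstream line-corporation instruction-SFT model and should be checked against the official model card.

```python
# Minimal loading sketch (assumes transformers>=4.33 plus optimum and auto-gptq installed).
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "line-corporation/japanese-large-lm-3.6b-instruction-sft-8bit-1g-actorder_True"

# use_fast=False mirrors the upstream model card; adjust if the fast tokenizer works for you.
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",  # place the ~2.8 GB of quantized weights on the available GPU
)

# Prompt format is an assumption based on the base instruction-SFT model;
# verify the exact template in the upstream repository.
prompt = "ユーザー: 日本の首都はどこですか?\nシステム: "
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Generation stays within the 2048-token context window listed above; longer prompts will need truncation.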
| Best Alternatives | Context / RAM | Downloads | Likes | 
|---|---|---|---|
| ...n Sft 4bit 128g Actorder False | 2K / 2.1 GB | 6 | 2 | 
| ...rrowSmartPlus 3.6B Instruction | 2K / 14.3 GB | 5 | 1 | 
| ...rtPlus 3.6B Instant Sft JHSVer | 2K / 14.3 GB | 0 | 1 | 
| ... Large Lm 3.6B Instruction Sft | 2K / 7.2 GB | 1772 | 26 | 
| ... GPT Neox 3.6B Instruction Sft | 2K / 7.4 GB | 2133 | 105 | 
| ... GPT Neox 3.6B Instruction Ppo | 2K / 7.4 GB | 1685 | 73 | 
| ...T Neox 3.6B Instruction Sft V2 | 2K / 7.4 GB | 1732 | 26 | 