Japanese Stablelm Instruct Alpha 7B V2 by stabilityai

Tags: Autotrain compatible, Custom code, Instruct, Ja, Japanese-stablelm, Region: us, Safetensors, Sharded, Tensorflow

Japanese Stablelm Instruct Alpha 7B V2 Parameters and Internals

Model Type: auto-regressive, decoder-only, causal language model

Use Cases
Areas: open-source community, chat-like applications
Limitations: potential biases and toxicity in generated responses
Considerations: do not treat model outputs as substitutes for human judgment or as sources of truth; please use responsibly.

Additional Notes
Uses the v1 version of the novelai-tokenizer for effective Japanese and English text processing (see the loading sketch below). Data collection for training benefited significantly from contributions by the EleutherAI Polyglot-JA team and Stable Community Japan.

Supported Languages
Japanese (fluent)

Training Details
Data Sources: Japanese translation of the Databricks Dolly-15k dataset, a Japanese translation of a subset of the Anthropic HH dataset, and the Wikinews subset of izumi-lab/llm-japanese-dataset
Context Length: 1024
Model Architecture: NeoX transformer
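The notes above name the v1 novelai-tokenizer, and the details below list a custom JapaneseStableLMAlphaForCausalLM architecture, so the model cannot be loaded with stock model classes alone. The following is a minimal loading sketch, assuming the Hugging Face transformers API, the novelai/nerdstash-tokenizer-v1 repository as the tokenizer source, and trust_remote_code=True to pull in the repo's custom modeling code; the special-token argument and the half-precision cast are assumptions based on common usage, not facts from this listing.

```python
# Minimal loading sketch (assumptions noted above, not taken from this listing).
import torch
from transformers import AutoModelForCausalLM, LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained(
    "novelai/nerdstash-tokenizer-v1",      # v1 novelai-tokenizer noted above (assumed repo id)
    additional_special_tokens=["▁▁"],      # extra whitespace token for this tokenizer (assumption)
)

model = AutoModelForCausalLM.from_pretrained(
    "stabilityai/japanese-stablelm-instruct-alpha-7b-v2",
    trust_remote_code=True,                # "Custom code" tag: runs the repo's own modeling code
    torch_dtype=torch.float16,             # shipped weights are float32 (~42 GB); fp16 roughly halves memory
)
model.eval()
```
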

LLM Name: Japanese Stablelm Instruct Alpha 7B V2
Repository: https://huggingface.co/stabilityai/japanese-stablelm-instruct-alpha-7b-v2
Model Size: 7B
Required VRAM: 42.1 GB
Updated: 2025-09-23
Maintainer: stabilityai
Instruction-Based: Yes
Model Files: 10.0 GB (1-of-3), 10.0 GB (2-of-3), 8.1 GB (3-of-3); 10.0 GB (1-of-2), 4.0 GB (2-of-2); 7.6 GB
Supported Languages: ja
Model Architecture: JapaneseStableLMAlphaForCausalLM
License: apache-2.0
Context Length: 2048
Model Max Length: 2048
Transformers Version: 4.33.2
Vocabulary Size: 65535
Torch Data Type: float32
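
Given the details above (float32 weights, 42.1 GB required VRAM, 2048-token context), the sketch below shows a generation call with the tokenizer and model loaded earlier. The prompt wording is a placeholder; the instruction template actually used for fine-tuning is documented on the Hugging Face model card and is not reproduced in this listing.

```python
# Illustrative generation call; the prompt is a placeholder, not the official instruction template.
prompt = "以下の質問に日本語で答えてください。\n質問: 日本で一番高い山は何ですか?\n回答: "
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False)

with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=128,        # stay well inside the 2048-token context window listed above
        do_sample=True,
        temperature=0.7,
        top_p=0.95,
    )

# Decode only the newly generated tokens.
new_tokens = output_ids[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```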

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124