Japanese StableLM Instruct Alpha 7B V2 is an open-source language model by Stability AI. Features: 7B LLM, VRAM: 42.1 GB, Context: 2K, License: apache-2.0, Instruction-Based, LLM Explorer Score: 0.1.
Japanese StableLM Instruct Alpha 7B V2 Parameters and Internals
Model Type
auto-regressive, decoder-only, causal-lm
Use Cases
Areas:
open-source community, chat-like applications
Limitations:
Potential biases and toxicity in generated responses
Considerations:
Do not treat model outputs as substitutes for human judgment or as sources of truth. Please use responsibly.
Additional Notes
Uses v1 of the novelai-tokenizer for effective processing of both Japanese and English text. Contributions from the EleutherAI Polyglot-JA team and Stable Community Japan significantly aided data collection for training.
Supported Languages
Japanese (fluent)
Training Details
Data Sources:
Japanese translation of the Databricks Dolly-15k dataset, Japanese translation of the subset of the Anthropic HH dataset, Wikinews subset of the izumi-lab/llm-japanese-dataset
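Since the training data consists of translated Dolly/Anthropic-style instruction sets, prompts are typically assembled in an Alpaca-style instruction template. The sketch below shows one plausible way to build such a prompt; the exact template wording is an assumption (this card does not specify it), so check the model repository before relying on it.

```python
def build_prompt(instruction: str, context: str = "") -> str:
    """Assemble a Japanese Alpaca-style instruction prompt.

    NOTE: the template text is an assumed example based on the
    Dolly/Alpaca-style training data, not confirmed by this card.
    """
    prompt = (
        "以下は、タスクを説明する指示と、文脈のある入力の組み合わせです。"
        "要求を適切に満たす応答を書きなさい。\n\n"
        f"### 指示:\n{instruction}\n\n"
    )
    if context:
        # The input section is included only when extra context is supplied.
        prompt += f"### 入力:\n{context}\n\n"
    # The model is expected to continue generating after this header.
    prompt += "### 応答:\n"
    return prompt

print(build_prompt("日本の首都はどこですか。"))
```

The returned string would then be tokenized and passed to the model's `generate` call; decoding everything after the final `### 応答:` header yields the answer.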