Japanese StableLM 3B 4E1T Instruct is an open-source language model by Stability AI. Features: 3B-parameter LLM, VRAM: 5.6GB, Context: 4K, License: apache-2.0, instruction-tuned, LLM Explorer Score: 0.1.
Japanese StableLM 3B 4E1T Instruct Parameters and Internals
Model Type
text-generation, causal-lm
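As a causal LM served through Hugging Face `transformers`, the model can be loaded for text generation roughly as follows. This is a minimal sketch, not the official usage snippet: the model id is taken from the listing above, and flags such as `trust_remote_code=True` and `torch_dtype=torch.float16` are plausible assumptions (half precision is consistent with the ~5.6GB VRAM figure); check the official model card for the exact recipe.

```python
def generate(prompt: str, max_new_tokens: int = 256) -> str:
    """Sketch: load the model and generate a completion for `prompt`.

    Imports are kept inside the function so the sketch can be read
    without transformers/torch installed.
    """
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "stabilityai/japanese-stablelm-3b-4e1t-instruct"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.float16,   # assumption: fp16 to fit ~5.6GB VRAM
        device_map="auto",
        trust_remote_code=True,      # may be needed on older transformers versions
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=max_new_tokens,
                         do_sample=True, temperature=0.7)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(out[0][inputs["input_ids"].shape[1]:],
                            skip_special_tokens=True)
```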
Use Cases
Areas:
Research, Commercial applications
Primary Use Cases:
As a foundational model for application-specific fine-tuning
Limitations:
Potential for generating offensive or inappropriate content.
Considerations:
Exercise caution in production systems to prevent harm or distress.
Supported Languages
Japanese (native)
Training Details
Data Sources:
A Japanese translation of the Databricks Dolly-15k dataset, a Japanese translation of a subset of the Anthropic HH dataset, and the Wikinews subset of the izumi-lab/llm-japanese-dataset
Methodology:
Fine-tuned on instruction-following datasets.
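Instruction-tuned models of this kind expect prompts in the template used during fine-tuning. The helper below illustrates a hypothetical Alpaca-style Japanese template (consistent with Dolly-style instruction data, but the section markers, system message, and `build_prompt` helper are illustrative assumptions; the authoritative template is on the official model card):

```python
def build_prompt(instruction: str, input_text: str = "") -> str:
    """Assemble a hypothetical Alpaca-style Japanese instruction prompt."""
    # System message: "Below is an instruction describing a task.
    # Write a response that appropriately fulfills the request."
    sys_msg = "以下は、タスクを説明する指示です。要求を適切に満たす応答を書きなさい。"
    prompt = f"{sys_msg}\n\n### 指示:\n{instruction}"
    if input_text:
        # Optional context block, mirroring Dolly's instruction/input split.
        prompt += f"\n\n### 入力:\n{input_text}"
    return prompt + "\n\n### 応答:\n"
```

The trailing `### 応答:` ("response") marker cues the model to start its answer, so generated text can be split off at that point.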
Context Length:
4096
Model Architecture:
Decoder-only transformer with modifications similar to LLaMA.
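With a 4,096-token context window, prompts longer than the window must be truncated before generation. A minimal sketch of one common policy (keep the newest tokens and reserve room for the generated continuation; the helper name and the keep-the-tail choice are illustrative, not from the model card):

```python
MAX_CONTEXT = 4096  # context length from the model card

def truncate_to_context(token_ids: list[int], max_new_tokens: int = 256) -> list[int]:
    """Keep only the most recent tokens that fit, leaving room for generation."""
    budget = MAX_CONTEXT - max_new_tokens
    # Drop the oldest tokens first, a common choice for chat-style prompts.
    return token_ids[-budget:] if len(token_ids) > budget else token_ids
```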