Sarashina2 may generate meaningless sequences, factually inaccurate content, or biased/objectionable outputs.
Supported Languages
Japanese (ja), English (en)
Training Details
Data Sources:
Common Crawl corpus, SlimPajama
Data Volume:
1T Japanese tokens
Model Architecture:
Llama2
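Since the architecture follows Llama2, the checkpoints can typically be loaded with the standard Hugging Face transformers causal-LM classes. The snippet below is a minimal sketch; the repository ID `sbintuitions/sarashina2-7b`, dtype, and generation settings are assumptions for illustration, not details confirmed by this card.

```python
# Minimal sketch: loading a Sarashina2 checkpoint with Hugging Face transformers.
# The repository ID and generation settings are assumptions; check the official
# model hub page for the exact names.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sbintuitions/sarashina2-7b"  # assumed repository ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # reduces memory; use float32 on CPU
    device_map="auto",
)

# Sarashina2 is a base (not instruction-tuned) model, so prompt it as a
# text-completion model rather than with chat-style instructions.
prompt = "日本の首都は"  # "The capital of Japan is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(inputs.input_ids, max_new_tokens=32, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```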
Responsible AI Considerations
Mitigation Strategies:
Sarashina2 has not yet been instruction-tuned. Developers are advised to fine-tune the model based on human preferences and safety considerations before deployment; a minimal fine-tuning sketch follows.
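One common way to apply this mitigation is supervised fine-tuning (SFT) on instruction data. The sketch below uses the trl library's SFTTrainer under stated assumptions: the repository ID, the dataset name, and the hyperparameters are hypothetical placeholders, not part of this card.

```python
# Minimal SFT sketch with the trl library (assumed tooling, not from this card).
# The repository ID, dataset name, and hyperparameters are illustrative only.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Hypothetical instruction dataset with a "text" column of prompt/response pairs.
dataset = load_dataset("your-org/japanese-instructions", split="train")

trainer = SFTTrainer(
    model="sbintuitions/sarashina2-7b",  # assumed repository ID
    train_dataset=dataset,
    args=SFTConfig(
        output_dir="sarashina2-sft",
        per_device_train_batch_size=2,
        num_train_epochs=1,
    ),
)
trainer.train()
```

Preference tuning (e.g., DPO or RLHF) and safety evaluation would normally follow such an SFT stage, in line with the card's advice on human preferences and safety considerations.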