Pretrained with causal language modeling using the NeMo Megatron GPT implementation. The instruct models were fine-tuned on instruction data in both chat and raw-text formats.
Model Architecture:
Large decoder-only pretrained transformer language models.
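As a rough illustration of the causal language modeling objective mentioned above, the sketch below uses the Hugging Face transformers API with a placeholder GPT-2 checkpoint. The actual models described here were trained with NeMo Megatron, so this is only a conceptual analogue of next-token prediction, not the training code.

```python
# Minimal sketch of the causal language modeling (next-token prediction) objective.
# The model and tokenizer names are placeholders, not the checkpoints from this card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")           # placeholder tokenizer
model = AutoModelForCausalLM.from_pretrained("gpt2")        # placeholder decoder-only model

text = "The quick brown fox jumps over the lazy dog."
inputs = tokenizer(text, return_tensors="pt")

# Passing the input ids as labels makes the model compute the shifted
# next-token cross-entropy loss, i.e. the causal LM objective.
outputs = model(**inputs, labels=inputs["input_ids"])
print(outputs.loss.item())
```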
Safety Evaluation
Risk Categories:
bias, safety
Ethical Considerations:
Potential for generating biased, incorrect, or harmful content; overrepresentation of some viewpoints.
Responsible AI Considerations
Fairness:
Potential for bias depending on how diverse the training data is.
Transparency:
A modified RAIL license is used to encourage greater communication and transparency.
Mitigation Strategies:
Open communication is encouraged, and feedback is collected from indirect users.
Instruction Following and Task Automation
Factuality and Completeness of Knowledge
Censorship and Alignment
Data Analysis and Insight Generation
Text Generation
Text Summarization and Feature Extraction
Code Generation
Multi-Language Support and Translation