Monolingual fine-tuning data: 20B tokens for ALMA-7B models, 12B tokens for ALMA-13B models
Methodology:
Two-step fine-tuning: initial fine-tuning on monolingual data, followed by fine-tuning on parallel data. ALMA-R models receive an additional LoRA fine-tuning stage that uses Contrastive Preference Optimization (CPO), sketched below.
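As a concrete illustration of the CPO stage, here is a minimal sketch assuming TRL's CPOTrainer (which implements Contrastive Preference Optimization) combined with a PEFT LoRA config. The preference triplet, LoRA settings, and hyperparameters below are illustrative placeholders, not the authors' actual data or recipe.

```python
# Minimal sketch of an ALMA-R-style CPO + LoRA stage using TRL's CPOTrainer.
# The dataset below is a toy illustration, NOT the authors' preference data.
import torch
from datasets import Dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import CPOConfig, CPOTrainer

model_name = "haoranxu/ALMA-7B-Pretrain"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)

# CPO trains on (prompt, chosen, rejected) triplets: a preferred translation
# versus a dispreferred one for the same source sentence.
train_dataset = Dataset.from_dict({
    "prompt": ["Translate this from German to English:\nGerman: Guten Morgen.\nEnglish:"],
    "chosen": ["Good morning."],
    "rejected": ["Good day."],
})

# LoRA adapters keep the preference-tuning stage lightweight (illustrative values).
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)

training_args = CPOConfig(
    output_dir="alma-cpo-sketch",
    per_device_train_batch_size=1,
    beta=0.1,        # weight of the preference term
    max_length=512,
)

trainer = CPOTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    processing_class=tokenizer,  # older TRL versions take tokenizer= instead
    peft_config=peft_config,
)
trainer.train()
```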
Note: a green score (e.g., "73.2") means that the model performs better than haoranxu/ALMA-7B-Pretrain.
ALMA 7B Pretrain Capabilities
Instruction Following and Task Automation
Factuality and Completeness of Knowledge
Censorship and Alignment
Data Analysis and Insight Generation
Text Generation
Text Summarization and Feature Extraction
Code Generation
Multi-Language Support and Translation
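For the translation capability specifically, ALMA models are prompted with a fixed translation template. Below is a minimal inference sketch following the prompt format used in the ALMA repository; the language pair, example sentence, and generation settings are illustrative.

```python
# Minimal translation-inference sketch for haoranxu/ALMA-7B-Pretrain.
# Generation settings are illustrative, not the authors' recommended values.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "haoranxu/ALMA-7B-Pretrain"
tokenizer = AutoTokenizer.from_pretrained(model_name, padding_side="left")
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.float16, device_map="auto"
)

# ALMA-style translation prompt: name the language pair, give the source,
# and let the model complete the target line.
prompt = "Translate this from German to English:\nGerman: Guten Morgen.\nEnglish:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    generated = model.generate(**inputs, num_beams=5, max_new_tokens=64)

print(tokenizer.decode(generated[0], skip_special_tokens=True))
```

Note that the Pretrain checkpoint is the stage-one (monolingual-only) model; the fully fine-tuned ALMA-7B uses the same prompt format.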