May generate inaccurate code and facts; limited scope for code; unreliable responses to instructions; language limitations; potential societal biases; toxicity
Additional Notes
Phi-1.5 is best suited for prompts in the QA format, the chat format, and the code format. It may produce irrelevant text after the main answer.
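A minimal sketch of the three prompt formats mentioned above. The exact template strings and helper names here are illustrative assumptions, not an official API; they mirror the typical "question + Answer:", "Name: utterance", and "signature + docstring" styles used with base models like Phi-1.5.

```python
# Hypothetical helpers for building Phi-1.5-style prompts.
# These templates are examples of the three formats, not an official spec.

def qa_prompt(question: str) -> str:
    # QA format: state the question, then cue the model with "Answer:".
    return f"{question}\nAnswer:"

def chat_prompt(turns: list[tuple[str, str]], next_speaker: str) -> str:
    # Chat format: alternating "Name: utterance" lines, ending with the
    # speaker whose reply the model should complete.
    lines = [f"{name}: {text}" for name, text in turns]
    lines.append(f"{next_speaker}:")
    return "\n".join(lines)

def code_prompt(signature: str, docstring: str) -> str:
    # Code format: a function signature plus docstring to be completed.
    return f'{signature}\n    """{docstring}"""\n'

print(qa_prompt("What is the capital of France?"))
print(chat_prompt([("Alice", "Can you help me debug this?")], "Bob"))
print(code_prompt("def is_prime(n: int) -> bool:", "Return True if n is prime."))
```

Whichever format is used, the model continues the text after the final cue, so trailing output beyond the answer may need to be trimmed.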
Supported Languages
en (standard)
Training Details
Data Sources:
The same data sources as phi-1, augmented with a new data source consisting of various synthetic NLP texts
Data Volume:
150B tokens
Training Time:
8 days
Hardware Used:
32 × A100-40G GPUs
Model Architecture:
Transformer-based model with next-word prediction objective
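The training figures above imply a rough aggregate throughput; the sketch below is a back-of-the-envelope calculation from the stated numbers (150B tokens, 8 days, 32 GPUs), not a reported benchmark.

```python
# Approximate training throughput implied by the figures above.
# Assumes "8 days" means wall-clock time on all 32 GPUs; a rough estimate only.

TOKENS = 150e9            # 150B training tokens
SECONDS = 8 * 24 * 3600   # 8 days in seconds
GPUS = 32                 # 32 x A100-40G

total_tps = TOKENS / SECONDS    # aggregate tokens per second
per_gpu_tps = total_tps / GPUS  # per-GPU tokens per second

print(f"~{total_tps:,.0f} tokens/s total, ~{per_gpu_tps:,.0f} tokens/s per GPU")
```

This works out to roughly 2.2 × 10^5 tokens/s in aggregate, on the order of 7k tokens/s per GPU.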