Phi 3 Small 128K Instruct is an open-source language model by Microsoft. Features: 7.4B-parameter LLM, VRAM: 14.8GB, Context: 128K tokens, License: MIT, Instruction-Based, LLM Explorer Score: 0.26.
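For orientation, here is a minimal loading sketch using Hugging Face transformers. The checkpoint id comes from this page; the bf16 dtype, `trust_remote_code=True` flag, and example prompt are assumptions to adapt to your setup.

```python
# Minimal sketch: loading the model with Hugging Face transformers.
# Assumes the custom Phi-3-small architecture needs trust_remote_code=True;
# adjust dtype/device_map to your hardware.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-3-small-128k-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # ~14.8GB of VRAM at bf16 for 7.4B params
    device_map="auto",
    trust_remote_code=True,
)

# Instruction-tuned checkpoints expect the chat format; apply_chat_template
# renders the messages into the model's expected prompt string.
messages = [{"role": "user", "content": "Summarize block-sparse attention in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```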
Training Data:
Publicly available documents, newly created synthetic data, and high-quality chat-format data
Data Volume:
4.8T tokens (including 10% multilingual)
Methodology:
Supervised fine-tuning (SFT) and Direct Preference Optimization (DPO)
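As a reference for the DPO stage, here is a minimal sketch of the standard DPO objective (Rafailov et al., 2023) in PyTorch; the function name and beta value are illustrative defaults, not Microsoft's training code.

```python
# Minimal sketch of the Direct Preference Optimization (DPO) loss.
# Inputs are summed log-probabilities of the chosen/rejected responses
# under the policy being trained and under a frozen reference model.
# beta=0.1 is an illustrative default, not the value used for Phi-3.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    # Log-ratio of chosen vs. rejected under each model.
    pi_logratios = policy_chosen_logps - policy_rejected_logps
    ref_logratios = ref_chosen_logps - ref_rejected_logps
    # Reward margin scaled by beta, pushed through -log(sigmoid(.)).
    return -F.logsigmoid(beta * (pi_logratios - ref_logratios)).mean()
```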
Context Length:
128,000 tokens
Training Time:
18 days
Hardware Used:
1024 H100-80G GPUs
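Taken together, the data volume, training time, and GPU count imply a rough training throughput; the back-of-envelope below uses only the numbers quoted above.

```python
# Back-of-envelope throughput implied by the quoted figures
# (4.8T tokens, 18 days, 1024 H100-80G GPUs). Illustrative arithmetic only.
tokens = 4.8e12
gpus = 1024
seconds = 18 * 24 * 3600          # 18 days in seconds

per_gpu_tokens_per_sec = tokens / (gpus * seconds)
print(f"{per_gpu_tokens_per_sec:,.0f} tokens/GPU/s")  # ~3,000 tokens/GPU/s
```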
Model Architecture:
Dense decoder-only Transformer with alternating dense and block-sparse attention layers
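To make "block-sparse attention" concrete, here is an illustrative mask builder in PyTorch: each query block attends only to a causal window of nearby key blocks. The block size and window width are hypothetical parameters, not the exact sparsity pattern Microsoft uses.

```python
# Illustrative block-sparse causal attention mask: queries attend only to
# keys within a window of nearby blocks. block_size and local_blocks are
# hypothetical; Phi-3-small's actual sparsity pattern may differ.
import torch

def blocksparse_causal_mask(seq_len: int, block_size: int = 64,
                            local_blocks: int = 4) -> torch.Tensor:
    # Map each position to its block index.
    blocks = torch.arange(seq_len) // block_size
    q_blk, k_blk = blocks[:, None], blocks[None, :]
    # Token-level causality: a position never attends to the future.
    causal = torch.arange(seq_len)[:, None] >= torch.arange(seq_len)[None, :]
    # Block-level locality: attend only to the current block and the
    # (local_blocks - 1) preceding key blocks.
    local = (q_blk - k_blk >= 0) & (q_blk - k_blk < local_blocks)
    return causal & local  # True where attention is allowed

mask = blocksparse_causal_mask(seq_len=512)
print(mask.float().mean())  # fraction of query-key pairs actually attended
```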
Responsible AI Considerations
Fairness:
The model is trained primarily on English text. Languages other than English, and English-language varieties with less representation in the training data, may see worse performance.
Mitigation Strategies:
Safety post-training was performed, though limitations may still be present.
Phi 3 Small 128K Instruct Capabilities
Instruction Following and Task Automation
Factuality and Completeness of Knowledge
Censorship and Alignment
Data Analysis and Insight Generation
Text Generation
Text Summarization and Feature Extraction
Code Generation
Multi-Language Support and Translation