Phi-1 is an open-source language model by Microsoft. Features: 1.4B-parameter LLM, VRAM: 2.8 GB, Context: 2K, License: MIT, LLM Explorer Score: 0.15, HumanEval: 51.2.
Intended for research use; not suitable for production coding tasks.
Applications:
Basic Python coding
Primary Use Cases:
Code generation, coding assistance
Limitations:
Limited to the packages present in its training data; may replicate training scripts; can generate inaccurate code; unreliable with non-code formats; limited natural-language comprehension.
Additional Notes
Generated code carries security risks, including directory traversal, injection attacks, misunderstood requirements, missing input validation, insecure defaults, and poor error handling.
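The directory-traversal and input-validation risks above are worth a concrete illustration. The sketch below (hypothetical helper name, not part of Phi-1 or its tooling) shows the kind of path check that model-generated file-handling code typically omits:

```python
import os

def resolve_user_path(base_dir, filename):
    """Resolve filename inside base_dir, rejecting directory traversal."""
    base = os.path.realpath(base_dir)
    target = os.path.realpath(os.path.join(base, filename))
    # A traversal attempt such as "../../etc/passwd" resolves outside base_dir.
    if os.path.commonpath([base, target]) != base:
        raise ValueError(f"path traversal attempt: {filename!r}")
    return target
```

Reviewing generated code for the absence of checks like this is part of the manual verification this model requires.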
Supported Languages
en (high)
Training Details
Data Sources:
The Stack v1.2, StackOverflow, code_contests, synthetic Python textbooks and exercises
Data Volume:
54B tokens (7B unique tokens)
Training Time:
6 days
Hardware Used:
8 A100 GPUs
Model Architecture:
Transformer-based model with next-word prediction objective
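As a toy illustration of the next-word-prediction objective (not the model's actual training code), the loss at one position is the negative log-probability assigned to the true next token:

```python
import math

def next_token_loss(logits, target_index):
    # Softmax over the vocabulary, then negative log-likelihood
    # of the true next token (cross-entropy at one position).
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    probs = [e / sum(exps) for e in exps]
    return -math.log(probs[target_index])

# With uniform logits over a 4-token vocabulary, the loss is ln(4) ≈ 1.386.
loss = next_token_loss([0.0, 0.0, 0.0, 0.0], 2)
```

Training minimizes the average of this loss over all positions in the corpus.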
Input Output
Input Format:
Python source code with comments, used to prompt generation
Accepted Modalities:
code
Output Format:
Python code
Performance Tips:
Manually verify all API usage when relying on packages outside the training set.
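One cheap mechanical check along these lines (a sketch with a hypothetical helper name, not an official Phi-1 tool) is to confirm that every module the generated code imports actually resolves in your environment before you run it:

```python
import ast
import importlib.util

def unresolved_imports(source):
    """Return top-level module names imported by `source` that do not resolve."""
    missing = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            names = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom) and node.module:
            names = [node.module]
        else:
            continue
        for name in names:
            root = name.split(".")[0]  # check only the top-level package
            if importlib.util.find_spec(root) is None:
                missing.append(root)
    return missing

generated = "import os\nfrom not_a_real_pkg import thing\n"
print(unresolved_imports(generated))  # → ['not_a_real_pkg']
```

This only catches missing packages; verifying that the generated calls match the installed package's actual API still requires reading the code.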
Note: a green score (e.g. "73.2") indicates that the model outperforms microsoft/phi-1.