Generated code may be inefficient or contain bugs. This is not an instruction-tuned model, so instruction-style prompts such as "Write a function that computes the square root." may not work well.
Considerations:
Outputs may contain bugs or inefficiencies, so they should be validated by a human before use.
Additional Notes
Consider the ethical and attribution aspects when using outputs from this model.
Supported Languages
The model was trained on 17 programming languages from The Stack v2.
Training Details
Data Sources:
GitHub code, arXiv, Wikipedia
Data Volume:
3.5+ trillion tokens
Methodology:
The model uses Grouped Query Attention, sliding window attention, and a Fill-in-the-Middle training objective
Context Length:
16,384 tokens
Hardware Used:
432 H100 GPUs
Model Architecture:
Transformer decoder with grouped-query and sliding window attention
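Because the model is trained with a Fill-in-the-Middle objective, it can complete a gap between a given prefix and suffix. A minimal sketch of assembling such a prompt, assuming the StarCoder-family FIM sentinel tokens (`<fim_prefix>`, `<fim_suffix>`, `<fim_middle>`); check the model's tokenizer configuration to confirm the exact token strings:

```python
def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Assemble a Fill-in-the-Middle prompt.

    The model is expected to generate the missing middle
    after the <fim_middle> sentinel.
    """
    return f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"

# Ask the model to fill in the body of an expression:
prompt = build_fim_prompt(
    prefix="def average(xs):\n    return ",
    suffix=" / len(xs)\n",
)
print(prompt)
```

The assembled string is then passed to the tokenizer and `generate` as an ordinary prompt; the generated tokens are the proposed middle span.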
Input / Output
Input Format:
Programming code prompts.
Accepted Modalities:
text
Output Format:
Generated text/code snippets
Performance Tips:
Use context-rich prompts for better quality responses.
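Since the model is a code-completion model rather than an instruction follower, a "context-rich prompt" means framing the task as code to be continued (a signature plus a docstring) instead of a natural-language command. A small illustrative helper (the function name and format here are hypothetical, not part of any API):

```python
def make_completion_prompt(signature: str, docstring: str) -> str:
    """Frame a task as a partial function definition for the model
    to complete, rather than as an instruction sentence."""
    return f'{signature}\n    """{docstring}"""\n'

prompt = make_completion_prompt(
    signature="def sqrt_newton(x: float, tol: float = 1e-10) -> float:",
    docstring="Return the square root of x using Newton's method.",
)
print(prompt)
```

Compared with the instruction "Write a function that computes the square root.", this gives the model a natural continuation point at the end of the docstring.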