Stablecode Completion Alpha 3B is an open-source language model by stabilityai. Features: 3B LLM, VRAM: 14.1 GB, Context: 16K tokens, License: apache-2.0, code generation, LLM Explorer Score: 0.1, HumanEval: 20.2.
Stablecode Completion Alpha 3B Parameters and Internals
Model Type
causal-lm
Use Cases
Areas:
code generation
Applications:
single- and multi-line code completion in multiple programming languages
Primary Use Cases:
code completion from a long (16K-token) context window; see the usage sketch after this section
Limitations:
Not intended to create unlawful content or engage in activities with high risk of harm.
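A minimal usage sketch for this use case, assuming the Hugging Face transformers library and the stabilityai/stablecode-completion-alpha-3b repository id referenced on this page; the generation settings are illustrative, not recommended values:

# Minimal code-completion sketch (assumes transformers, torch, and a GPU
# with ~14 GB of VRAM, per the figure quoted above).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "stabilityai/stablecode-completion-alpha-3b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # add trust_remote_code=True if the repo requires it
)

# Complete a partially written function.
prompt = "def fibonacci(n):\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64, temperature=0.2, do_sample=True)
print(tokenizer.decode(out[0], skip_special_tokens=True))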
Additional Notes
The model uses the StarCoder tokenizer with a vocabulary size of 49,000.
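As a quick check (a sketch assuming the same repository id, not an official snippet), the tokenizer can be loaded on its own:

# Inspect the StarCoder tokenizer shipped with the model.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("stabilityai/stablecode-completion-alpha-3b")
print(tok.vocab_size)                    # the card reports a ~49,000-token vocabulary
print(tok("def add(a, b):").input_ids)   # token ids for a short code fragment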
Supported Languages
Code (supported)
Training Details
Data Sources:
bigcode/starcoderdata
Data Volume:
300B tokens for pre-training, plus an additional 200B tokens for fine-tuning
Methodology:
Pre-trained using a multi-stage context-length extension schedule: first pre-trained at a context length of 4096 tokens, then fine-tuned at 16384 tokens (as sketched below).
Context Length:
16384
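A schematic of that two-stage schedule; the stage table and chunking helper below are illustrative assumptions, not Stability AI's training code:

# Two-stage context-length extension schedule as described in this card
# (hypothetical helper; real data pipelines are more involved).
STAGES = [
    ("pre-train", 4096, 300_000_000_000),   # 300B tokens at 4K context
    ("fine-tune", 16384, 200_000_000_000),  # 200B tokens at 16K context
]

def chunk(token_stream, context_length):
    """Split a flat token stream into fixed-length training sequences."""
    for i in range(0, len(token_stream) - context_length + 1, context_length):
        yield token_stream[i : i + context_length]

for phase, ctx, token_budget in STAGES:
    print(f"{phase}: sequences of {ctx} tokens, "
          f"~{token_budget // ctx:,} sequences in the budget")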
Model Architecture:
Decoder-only transformer with parallel attention and MLP residuals sharing a single input LayerNorm, rotary position embeddings, and bias terms in the LayerNorm layers only.
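A minimal PyTorch sketch of one such block, with illustrative dimensions assumed (hidden size 2560, 32 heads); note how attention and MLP both read the same single LayerNorm output and are added back to the residual stream together:

# Parallel attention/MLP residual block (GPT-NeoX-style), per the
# architecture summary above. Dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class ParallelBlock(nn.Module):
    def __init__(self, d_model=2560, n_heads=32):
        super().__init__()
        self.ln = nn.LayerNorm(d_model)  # single input LayerNorm, with bias
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(d_model, 4 * d_model),
            nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )

    def forward(self, x, attn_mask=None):
        h = self.ln(x)  # both branches share this normalized input
        # Rotary position embeddings would be applied to queries and keys
        # inside attention; omitted here for brevity.
        a, _ = self.attn(h, h, h, attn_mask=attn_mask, need_weights=False)
        m = self.mlp(h)
        return x + a + m  # parallel residual: add both branch outputs at once

The parallel formulation (also used by GPT-J and GPT-NeoX) lets the attention and MLP branches be computed concurrently, which improves throughput compared with the sequential attention-then-MLP layout.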