| Model Type | Auto-regressive language model (transformer decoder) |
|
| Use Cases |
| Areas | Research, commercial applications |
| Applications | Code generation, code completion |
| Primary Use Cases | Generate code snippets given some context; see the completion sketch below |
| Limitations | No assurance of code functionality; may produce suboptimal, buggy, or exploitable code |
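
The primary use case above is ordinary causal-LM completion: the model extends a given context token by token. The following is a minimal sketch using the Hugging Face `transformers` API; the checkpoint identifier, prompt, and generation settings are placeholder assumptions, not values taken from this card.

```python
# Minimal code-completion sketch. Assumptions: the `transformers` library is
# installed and MODEL_ID is replaced with the real checkpoint name, which this
# card does not state.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "your-org/your-code-model"  # placeholder identifier

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

# The context to complete; the model continues it token by token.
prompt = "def fibonacci(n: int) -> int:\n    "

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=64,   # illustrative completion budget
    do_sample=False,     # greedy decoding keeps the example deterministic
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Per the Limitations row, any generated snippet should still be reviewed and tested before use.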
|
|
| Supported Languages | Python (Complete), Java (Complete), JavaScript (Complete), Rust (Complete), C++ (Complete), C (Complete), C# (Complete), Go (Complete) |
|
| Training Details |
| Data Sources | |
| Methodology | Fill-in-the-Middle (FIM) training objective; see the prompt-assembly sketch below |
| Context Length | |
| Model Architecture | Variable Grouped-Query Attention (GQA) and Rotary Position Embeddings (RoPE); see the attention sketch below |
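
Fill-in-the-Middle training lets the model complete a gap between a known prefix and suffix instead of only extending a prefix. The sentinel tokens that delimit the segments are model-specific and are not given in this card; the sketch below assumes StarCoder-style `<fim_prefix>` / `<fim_suffix>` / `<fim_middle>` markers purely for illustration.

```python
# Fill-in-the-Middle prompt assembly (sketch). The sentinel strings are
# assumptions; substitute whatever FIM tokens the actual checkpoint documents.
FIM_PREFIX = "<fim_prefix>"
FIM_SUFFIX = "<fim_suffix>"
FIM_MIDDLE = "<fim_middle>"

def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Arrange prefix and suffix so the model generates the missing middle."""
    return f"{FIM_PREFIX}{prefix}{FIM_SUFFIX}{suffix}{FIM_MIDDLE}"

prefix = "def add(a: int, b: int) -> int:\n    return "
suffix = "\n\nprint(add(2, 3))\n"
prompt = build_fim_prompt(prefix, suffix)
# Feed `prompt` to the tokenizer/model exactly as in a plain completion call;
# the generated text is the code that belongs between prefix and suffix.
```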
|
|
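The architecture row lists Grouped-Query Attention and Rotary Position Embeddings. As a rough illustration only, the PyTorch sketch below shows the two ideas: RoPE rotates query/key vectors by position, and GQA lets several query heads share one key/value head. Head counts, dimensions, and the rotation convention are assumptions; the card does not describe the model's actual implementation, including what "variable" GQA means here.

```python
# Illustrative GQA + RoPE sketch in PyTorch (not the model's actual code).
import torch

def rotate_half(x: torch.Tensor) -> torch.Tensor:
    x1, x2 = x.chunk(2, dim=-1)
    return torch.cat((-x2, x1), dim=-1)

def apply_rope(x: torch.Tensor, positions: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
    # x: (batch, heads, seq, head_dim); positions: (seq,)
    head_dim = x.shape[-1]
    inv_freq = 1.0 / (base ** (torch.arange(0, head_dim, 2, dtype=torch.float32) / head_dim))
    angles = positions.float()[:, None] * inv_freq[None, :]   # (seq, head_dim/2)
    cos = torch.cat((angles.cos(), angles.cos()), dim=-1)     # (seq, head_dim)
    sin = torch.cat((angles.sin(), angles.sin()), dim=-1)
    return x * cos + rotate_half(x) * sin                     # position-dependent rotation

def grouped_query_attention(q, k, v, n_kv_heads: int) -> torch.Tensor:
    # q: (batch, n_q_heads, seq, head_dim); k, v: (batch, n_kv_heads, seq, head_dim)
    group = q.shape[1] // n_kv_heads
    k = k.repeat_interleave(group, dim=1)   # each K/V head serves `group` query heads
    v = v.repeat_interleave(group, dim=1)
    scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
    causal = torch.triu(torch.full(scores.shape[-2:], float("-inf")), diagonal=1)
    return (scores + causal).softmax(dim=-1) @ v

# Toy usage: 8 query heads sharing 2 key/value heads.
batch, seq, head_dim, n_q, n_kv = 1, 16, 64, 8, 2
q = torch.randn(batch, n_q, seq, head_dim)
k = torch.randn(batch, n_kv, seq, head_dim)
v = torch.randn(batch, n_kv, seq, head_dim)
pos = torch.arange(seq)
out = grouped_query_attention(apply_rope(q, pos), apply_rope(k, pos), v, n_kv)
print(out.shape)  # torch.Size([1, 8, 16, 64])
```
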
| Input / Output |
| Input Format | |
| Accepted Modalities | |
| Output Format | |
| Performance Tips | Measured at the maximal batch size that fits on the device; see the batched-generation sketch below |
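
The performance note above ties throughput to batch size. The sketch below shows one way to run batched generation with `transformers`; the placeholder checkpoint name, the prompt list, and the use of left padding are assumptions, and in practice the batch is grown until device memory is saturated.

```python
# Batched generation sketch (assumptions: Hugging Face `transformers`, a
# placeholder MODEL_ID, and left padding, which decoder-only models need for
# batched decoding).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "your-org/your-code-model"  # placeholder identifier

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, padding_side="left")
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # reuse EOS if no pad token is defined

device = "cuda" if torch.cuda.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(MODEL_ID).to(device)

prompts = [
    "def is_prime(n: int) -> bool:\n    ",
    "class Stack:\n    def __init__(self):\n        ",
]  # grow this list until the device's memory is fully used

batch = tokenizer(prompts, return_tensors="pt", padding=True).to(device)
outputs = model.generate(
    **batch,
    max_new_tokens=64,
    pad_token_id=tokenizer.pad_token_id,
)
for completion in tokenizer.batch_decode(outputs, skip_special_tokens=True):
    print(completion)
```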
|
|