| Model Type | |
| --- | --- |

| Use Cases | |
| --- | --- |
| Areas | |
| Applications | Conversational AI, personal advice, text generation |
| Primary Use Cases | Creative writing, personal assistant tasks |
| Limitations | Highly compliant; suitable only for controlled environments. Not recommended for use without custom alignment layers. |
| Considerations | The model is highly compliant; monitor for ethical use. |

| Additional Notes | |
| --- | --- |
| Notes | Ensure alignment layers are implemented when deploying the model to handle its responsive behavior responsibly. |
|
| Supported Languages | |
| --- | --- |

| Training Details | |
| --- | --- |
| Data Sources | ehartford/dolphin, jondurbin/airoboros-2.2.1 |
| Methodology | Trained for 4 epochs on uncensored, deduplicated, and quality-filtered data. The included datasets increase creativity and empathy. |
| Training Time | |
| Hardware Used | |
| Model Architecture | |
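The methodology above notes that the training data was deduplicated and quality-filtered. The actual pipeline is not documented in this card; as a rough illustration, exact deduplication is often done with a content hash over lightly normalized text (the function name and normalization choice here are hypothetical):

```python
import hashlib

def dedupe_exact(records):
    """Drop exact duplicate texts using a content hash (illustrative only)."""
    seen = set()
    unique = []
    for text in records:
        # Normalize whitespace so trivial spacing differences do not defeat dedup.
        key = hashlib.sha256(" ".join(text.split()).encode("utf-8")).hexdigest()
        if key not in seen:
            seen.add(key)
            unique.append(text)
    return unique

corpus = ["Hello world", "hello  world", "Hello world", "Goodbye"]
print(dedupe_exact(corpus))  # second "Hello world" removed; case variant kept
```

Real dataset curation typically also applies near-duplicate detection (e.g. MinHash) and heuristic quality filters, which this sketch omits.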
|
| Responsible AI Considerations | |
| --- | --- |
| Accountability | Responsibility for generated content lies with the user. |
| Mitigation Strategies | Users are advised to implement their own alignment layers. |
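Because the model ships without built-in alignment, a common lightweight form of "alignment layer" is a guardrail placed in front of the model: a system prompt plus a request filter. A minimal sketch under that assumption (the deny-list, system prompt, and function name are all illustrative, not part of this model card):

```python
BLOCKED_TOPICS = {"malware", "weapons"}  # illustrative deny-list, not exhaustive

SYSTEM_PROMPT = (
    "You are a helpful assistant. Decline requests that are illegal or harmful."
)

def apply_alignment_layer(user_message: str) -> list[dict]:
    """Wrap a user message with a system prompt; refuse on deny-listed topics."""
    lowered = user_message.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        # Short-circuit with a canned refusal instead of calling the model.
        return [{"role": "assistant",
                 "content": "I can't help with that request."}]
    return [{"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message}]

print(apply_alignment_layer("Write a poem about autumn")[0]["role"])  # system
```

Keyword deny-lists are easy to bypass; production deployments usually layer a trained safety classifier on top of prompt-level guardrails like this.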
|
|
| Input Output | |
| --- | --- |
| Input Format | |
| Accepted Modalities | |
| Output Format | Text generated in response to prompts |
| Performance Tips | Ensure suitable hardware for fast inference; use the provided quantization options to optimize performance. |
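The performance tips mention quantization but the card does not specify which formats are provided. To show why quantization helps, here is a minimal sketch of symmetric per-tensor int8 quantization, the idea behind most 8-bit options: weights are stored as one-byte integers plus a single float scale, so w ~= scale * q at roughly a quarter of float32 memory (function names are illustrative):

```python
def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: w ~= scale * q (illustrative)."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127 if max_abs else 1.0
    q = [round(w / scale) for w in weights]  # each value fits in one signed byte
    return q, scale

def dequantize_int8(q, scale):
    """Reconstruct approximate float weights from int8 values and the scale."""
    return [v * scale for v in q]

weights = [0.5, -1.27, 0.0, 1.0]
q, scale = quantize_int8(weights)
restored = dequantize_int8(q, scale)
# Rounding bounds the per-weight reconstruction error by scale / 2.
```

In practice one would use an established library (e.g. bitsandbytes or a GGUF runtime) rather than hand-rolled quantization; this only illustrates the memory/accuracy trade-off.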
|
|
| Release Notes | |
| --- | --- |
| Notes | Checkpoint release that fixes overfitting; adds empathy and improved conversation features. |
| Notes | Training sponsored by a16z; improved compliance and uncensored behavior. Alignment and bias filtering were removed from the datasets. |
|
|
|