Pygmalion 7B 4bit 128g Cuda 2048Token is an open-source language model published by DanielAWrightGabrielAI. Features: 7B-parameter LLM, ~4GB VRAM, 2K context window, 4-bit quantized (group size 128, CUDA), LLM Explorer Score: 0.07.
Pygmalion 7B 4bit 128g Cuda 2048Token Parameters and Internals
Model Type
text generation, conversational
Use Cases
Areas:
fictional conversation, entertainment
Limitations:
Not fine-tuned to be safe and harmless; may contain profanity and lewd or offensive text; may produce socially unacceptable or undesirable output; outputs may often be factually wrong or misleading
Additional Notes
The model was trained on the usual Pygmalion persona + chat format.
Training Details
Methodology:
Fine-tuned using a subset of data from Pygmalion-6B-v8-pt4
Input Output
Input Format:
[CHARACTER]'s Persona: [A few sentences about the character you want the model to play]
[DIALOGUE HISTORY]
You: [User's input message here]
[CHARACTER]:
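As a minimal sketch, the input format above can be assembled programmatically. The helper below is hypothetical (the function name and parameters are not part of the model card); only the template layout — persona line, dialogue history, `You:` turn, and the trailing `[CHARACTER]:` tag — comes from the format described above.

```python
# Hypothetical helper illustrating the Pygmalion persona + chat prompt format.
# Only the template layout is taken from the model card; names are illustrative.

def build_pygmalion_prompt(character, persona, history, user_message):
    """Assemble a single prompt string in the Pygmalion persona + chat format."""
    lines = [f"{character}'s Persona: {persona}"]
    lines.extend(history)              # prior turns, already formatted as "You: ..." / "<Char>: ..."
    lines.append(f"You: {user_message}")
    lines.append(f"{character}:")      # trailing tag cues the model to reply as the character
    return "\n".join(lines)

prompt = build_pygmalion_prompt(
    character="Aria",
    persona="Aria is a cheerful librarian who loves mystery novels.",
    history=["You: Hi!", "Aria: Hello there! Looking for a book?"],
    user_message="Got any recommendations?",
)
print(prompt)
```

The resulting string would then be passed to the model as-is; generation continues from the trailing `Aria:` tag.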