The large language model is trained on a diverse range of internet text, which may contain biased, racist, offensive, or otherwise inappropriate content. It may also produce incorrect or irrelevant responses, so users must critically evaluate its output.
Training Details
Data Sources: internet text data
Model Architecture: GPTNeoXForCausalLM
Input / Output
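As a minimal sketch of how a GPTNeoXForCausalLM checkpoint like this one could be loaded and queried with Hugging Face `transformers`. The checkpoint id `diegomiranda/text-to-cypher` is the one named in this card, but the prompt wording in `build_prompt` is an assumption, not the model's documented format:

```python
def build_prompt(question: str) -> str:
    """Hypothetical instruction wrapper; the model's real prompt format may differ."""
    return f"Translate to Cypher: {question}\nCypher:"

def generate_cypher(question: str, max_new_tokens: int = 64) -> str:
    # Imported lazily so build_prompt can be used without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "diegomiranda/text-to-cypher"  # checkpoint named in this card
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    inputs = tokenizer(build_prompt(question), return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Strip the prompt tokens so only the generated continuation remains.
    new_tokens = output_ids[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)

if __name__ == "__main__":
    print(generate_cypher("Which movies did Tom Hanks act in?"))
```

The first call downloads the checkpoint, so expect a delay; caching the tokenizer and model outside the function would avoid reloading them per query.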
Performance Tips:
For best performance, run the model on a machine with one or more GPUs.
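The GPU recommendation above can be applied in code by placing the model on whatever accelerator is available. A sketch, assuming PyTorch as the backend (it degrades gracefully to CPU when PyTorch is absent):

```python
def pick_device() -> str:
    """Prefer CUDA, then Apple's MPS backend, falling back to CPU."""
    try:
        import torch
    except ImportError:
        return "cpu"  # no PyTorch available: nothing to accelerate
    if torch.cuda.is_available():
        return "cuda"
    mps = getattr(torch.backends, "mps", None)
    if mps is not None and mps.is_available():
        return "mps"
    return "cpu"

device = pick_device()
# After loading, move the model (and every input tensor) to the chosen device:
# model = AutoModelForCausalLM.from_pretrained("...").to(device)
print(f"Running on: {device}")
```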
Note: on the leaderboard, a green score (e.g. "73.2") indicates a model that performs better than diegomiranda/text-to-cypher.
Text-to-Cypher Capabilities
Instruction Following and Task Automation
Factuality and Completeness of Knowledge
Censorship and Alignment
Data Analysis and Insight Generation
Text Generation
Text Summarization and Feature Extraction
Code Generation
Multi-Language Support and Translation