Neural-chat-7b-v3 can produce factually incorrect output, and may generate lewd, biased, or otherwise offensive outputs. Safety testing is recommended before deployment.
Supported Languages
en (high)
Training Details
Data Sources:
Open-Orca/SlimOrca, Intel/orca_dpo_pairs
Methodology:
Direct Preference Optimization (DPO)
Context Length:
8192
Hardware Used:
Intel Gaudi 2 processor (8 cards)
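The DPO method listed above optimizes the policy directly on preference pairs (such as those in Intel/orca_dpo_pairs) without a separate reward model. As a minimal sketch, assuming the standard per-example DPO loss with an illustrative beta value (the actual training hyperparameters are not given here):

```python
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Per-example DPO loss: -log sigmoid(beta * margin), where the margin
    is the difference of policy-vs-reference log-ratios for the chosen
    and rejected responses. beta=0.1 is an illustrative default."""
    margin = ((logp_chosen - ref_logp_chosen)
              - (logp_rejected - ref_logp_rejected))
    # Numerically plain sigmoid for illustration.
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))
```

When the policy matches the reference model, the margin is zero and the loss equals log 2; as the policy prefers the chosen response more strongly than the reference does, the loss decreases.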
Input/Output
Input Format:
Text
Accepted Modalities:
text
Output Format:
Text
Performance Tips:
Fine-tune the model for specific tasks to improve accuracy.
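Since the model takes and returns plain text, prompts are typically assembled into a single string before generation. A minimal sketch of a prompt builder, assuming the "### System / ### User / ### Assistant" chat template commonly shown for the neural-chat family (verify the exact template against the official model card):

```python
def build_prompt(user_msg, system_msg="You are a helpful assistant."):
    """Assemble a single-turn chat prompt string.

    The section markers below are an assumed template for neural-chat
    models; confirm against the model card before relying on them.
    """
    return (f"### System:\n{system_msg}\n"
            f"### User:\n{user_msg}\n"
            f"### Assistant:\n")
```

The resulting string can then be passed to any text-generation API (e.g. a Hugging Face `transformers` pipeline); generation should stop when the model begins a new `###` section.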
Release Notes
Version:
v3
Date:
October 2023
Notes:
Improved performance on various benchmarks; better alignment achieved via the Direct Preference Optimization (DPO) method.