Neural Chat 7B V3.2 by Intel


Tags: arXiv:2309.12284, autotrain-compatible, dataset:meta-math/MetaMathQA, en, endpoints-compatible, finetuned, Intel, math, mistral, model-index, PyTorch, region:us, sharded

Neural Chat 7B V3.2 Benchmarks

Neural Chat 7B V3.2 (Intel/neural-chat-7b-v3-2)

Neural Chat 7B V3.2 Parameters and Internals

Model Type 
Large Language Model, Fine-tuned Model
Use Cases 
Areas:
Research, Commercial Applications
Primary Use Cases:
Language-related tasks
Limitations:
Should not be used to create hostile or alienating environments for people
Additional Notes 
Neural-chat-7b-v3-2 should not be relied on to produce factually accurate information. Users should be aware of the model's risks and limitations.
Supported Languages 
en (English)
Training Details 
Data Sources:
meta-math/MetaMathQA
Methodology:
Direct Preference Optimization (DPO), fine-tuning (see the sketch after this block)
Context Length:
8192 (fine-tuning context; the released checkpoint lists a maximum length of 32768 below)
Hardware Used:
Intel Gaudi 2 processor
Model Architecture:
Fine-tuning from mistralai/Mistral-7B-v0.1
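For orientation, the DPO step can be sketched with Hugging Face TRL. This is not Intel's published recipe: the preference dataset name below is a placeholder (DPO requires prompt/chosen/rejected triples, which MetaMathQA itself does not provide), and the hyperparameters are illustrative.

```python
# Illustrative DPO sketch with Hugging Face TRL -- not Intel's actual training setup.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "mistralai/Mistral-7B-v0.1"  # base model named in this card
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Placeholder: any dataset with "prompt", "chosen", and "rejected" columns.
prefs = load_dataset("your-org/your-preference-pairs", split="train")

args = DPOConfig(
    output_dir="neural-chat-dpo-sketch",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,
    learning_rate=5e-7,
    beta=0.1,          # DPO temperature; a commonly used default
    max_length=1024,   # truncation length for prompt + completion
)

trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=prefs,
    processing_class=tokenizer,  # older TRL releases use tokenizer= instead
)
trainer.train()
```

TRL builds the frozen reference model internally when no ref_model is passed. Intel's own runs used Gaudi 2 hardware; this sketch targets a generic PyTorch setup.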
Safety Evaluation 
Methodologies:
Community feedback
Risk Categories:
Misinformation, Bias
Ethical Considerations:
The model may generate lewd, biased, or offensive outputs.
Input Output 
Input Format:
Text-based prompt
Accepted Modalities:
text
Output Format:
Text response
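As a concrete example of the text-in/text-out interface, here is a minimal generation sketch with Hugging Face transformers. The "### System / ### User / ### Assistant" prompt layout is the style commonly shown for the neural-chat series; confirm against the upstream model card before relying on it.

```python
# Minimal text-in / text-out sketch with Hugging Face transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Intel/neural-chat-7b-v3-2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Prompt layout assumed from the neural-chat series convention.
prompt = (
    "### System:\nYou are a helpful math assistant.\n"
    "### User:\nWhat is 15% of 240?\n"
    "### Assistant:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
# Decode only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```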
Release Notes 
Version:
v3-2
Date:
December, 2023
Notes:
Fine-tuned with Direct Preference Optimization (DPO).
LLM Name: Neural Chat 7B V3.2
Repository: https://huggingface.co/Intel/neural-chat-7b-v3-2
Model Size: 7b
Required VRAM: 14.4 GB
Updated: 2025-09-23
Maintainer: Intel
Model Type: mistral
Model Files: 9.9 GB (1-of-2), 4.5 GB (2-of-2)
Supported Languages: en
Model Architecture: MistralForCausalLM
License: apache-2.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.34.0
Tokenizer Class: LlamaTokenizer
Vocabulary Size: 32000
Torch Data Type: float16
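The architecture, length, tokenizer, dtype, and vocabulary fields above can be checked without downloading the weights; a small sketch using the transformers config and tokenizer loaders:

```python
# Quick sanity check of the spec fields above, using only the config and tokenizer
# files (no model weights are downloaded).
from transformers import AutoConfig, AutoTokenizer

model_id = "Intel/neural-chat-7b-v3-2"
config = AutoConfig.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

print(config.architectures)            # expected: ['MistralForCausalLM']
print(config.max_position_embeddings)  # expected: 32768
print(config.torch_dtype)              # expected: float16
print(config.vocab_size)               # expected: 32000
print(type(tokenizer).__name__)        # expected: LlamaTokenizer (or the Fast variant)
```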

Quantized Models of the Neural Chat 7B V3.2

Model | Likes | Downloads | VRAM
Neural Chat 7B V3.2 GPTQ | 9 | 19 | 4 GB
Neural Chat 7B V3.2 GGUF | 9 | 209 | 3 GB
Neural Chat 7B V3.2 AWQ | 2 | 39 | 4 GB
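To run one of the quantized builds on CPU, a GGUF file can be loaded with llama-cpp-python. The file name below is a placeholder; substitute an actual GGUF export of neural-chat-7b-v3-2 at the quantization level you downloaded.

```python
# Running a GGUF quantization on CPU with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="neural-chat-7b-v3-2.Q4_K_M.gguf",  # hypothetical local file name
    n_ctx=4096,      # context window to allocate
    n_threads=8,     # CPU threads to use
)

out = llm(
    "### User:\nSolve 2x + 6 = 20 for x.\n### Assistant:\n",
    max_tokens=128,
    temperature=0.0,
)
print(out["choices"][0]["text"])
```

GPTQ and AWQ builds are instead loaded through transformers' AutoModelForCausalLM (with the auto-gptq or autoawq backend installed) and need roughly the VRAM figures listed in the table above.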

Best Alternatives to Neural Chat 7B V3.2

Best Alternatives | Context / RAM | Downloads | Likes
...Nemo Instruct 2407 Abliterated | 1000K / 24.5 GB | 141 | 18
MegaBeam Mistral 7B 512K | 512K / 14.4 GB | 8908 | 50
SpydazWeb AI HumanAI RP | 512K / 14.4 GB | 17 | 1
SpydazWeb AI HumanAI 002 | 512K / 14.4 GB | 18 | 1
...daz Web AI ChatML 512K Project | 512K / 14.5 GB | 12 | 0
MegaBeam Mistral 7B 300K | 282K / 14.4 GB | 3779 | 16
MegaBeam Mistral 7B 300K | 282K / 14.4 GB | 8082 | 16
Hebrew Mistral 7B 200K | 256K / 30 GB | 1316 | 15
Astral 256K 7B V2 | 250K / 14.4 GB | 5 | 0
Astral 256K 7B | 250K / 14.4 GB | 5 | 0



Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124