| Property | Value |
|---|---|
| LLM Name | Mistral NeMo Minitron 8B Chat |
| Repository 🤗 | https://huggingface.co/rasyosef/Mistral-NeMo-Minitron-8B-Chat |
| Base Model(s) | |
| Model Size | 8b |
| Required VRAM | 16.8 GB |
| Updated | 2025-09-08 |
| Maintainer | rasyosef |
| Model Type | mistral |
| Model Files | |
| Model Architecture | MistralForCausalLM |
| License | other |
| Context Length | 8192 |
| Model Max Length | 8192 |
| Transformers Version | 4.44.2 |
| Tokenizer Class | PreTrainedTokenizerFast |
| Padding Token | <\|im_end\|> |
| Vocabulary Size | 131074 |
| Torch Data Type | bfloat16 |
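
A minimal sketch of loading the model with Hugging Face Transformers, based only on the metadata above (MistralForCausalLM, bfloat16, 8192-token context, `<|im_end|>` padding token, which suggests a ChatML-style chat template). The prompt and generation settings are illustrative assumptions, not values from this card:

```python
# Sketch: load rasyosef/Mistral-NeMo-Minitron-8B-Chat per the card's metadata.
# Assumes ~16.8 GB of VRAM is available, per "Required VRAM" above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "rasyosef/Mistral-NeMo-Minitron-8B-Chat"

# AutoTokenizer resolves to the card's PreTrainedTokenizerFast.
tokenizer = AutoTokenizer.from_pretrained(model_id)

# bfloat16 matches the card's "Torch Data Type".
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# The <|im_end|> padding token suggests a ChatML-style template, so
# apply_chat_template should produce correctly formatted prompts.
messages = [{"role": "user", "content": "Summarize what model distillation is."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Note that generated prompts should stay within the 8192-token context length listed above, which is shorter than the 32K context of the alternatives in the table below.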
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| Ministral 8B Instruct 2410 HF | 32K / 32 GB | 47 | 10 |
| Ministrations 8B V1 | 32K / 16.1 GB | 9 | 21 |
| Minerva 8B | 32K / 14.5 GB | 6 | 0 |
| Ministral 8B Slerp | 32K / 29.2 GB | 7 | 0 |
| The Omega Directive M 8B V1.0 | 32K / 16.1 GB | 23 | 8 |
| Mistral Pro 8B V0.1 | 32K / 17.9 GB | 1586 | 66 |
| Deeplm Mistral V0.1 8B Stage1 | 32K / 14.5 GB | 12 | 1 |
| DeepOpus 1 8B Preview | 32K / 16.1 GB | 75 | 2 |
| DeepNeo 1 8B Preview | 32K / 16.1 GB | 63 | 2 |
| Forgotten Abomination 8B V2.2 | 32K / 16.1 GB | 6 | 2 |