| Field | Value |
|---|---|
| LLM Name | Fireball Mistral Nemo Instruct 24B Merge V1 |
| Repository 🤗 | https://huggingface.co/EpistemeAI/Fireball-Mistral-Nemo-Instruct-24B-merge-v1 |
| Base Model(s) | |
| Merged Model | Yes |
| Model Size | 24b |
| Required VRAM | 24.6 GB |
| Updated | 2024-10-06 |
| Maintainer | EpistemeAI |
| Model Type | mistral |
| Instruction-Based | Yes |
| Model Files | |
| Gated Model | Yes |
| Model Architecture | MistralForCausalLM |
| License | proprietary |
| Context Length | 1024000 |
| Model Max Length | 1024000 |
| Transformers Version | 4.42.4 |
| Tokenizer Class | PreTrainedTokenizerFast |
| Padding Token | <pad> |
| Vocabulary Size | 131072 |
| Torch Data Type | bfloat16 |
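The Required VRAM figure above is roughly what the raw weights occupy at the listed dtype (bfloat16 stores each parameter in 2 bytes); it excludes activation and KV-cache overhead. A minimal sketch of that arithmetic (the function name and parameter counts below are illustrative, not from the model card):

```python
def estimate_weight_vram_gb(n_params: float, bytes_per_param: int = 2) -> float:
    """Approximate GB needed to hold the raw weights alone.

    bytes_per_param defaults to 2 (bfloat16/float16); use 4 for float32,
    or ~0.5-1 for common 4-/8-bit quantizations. Ignores activations,
    KV cache, and framework overhead, so treat the result as a floor.
    """
    return n_params * bytes_per_param / 1e9


# Example: a hypothetical 7B-parameter model in bfloat16
print(f"{estimate_weight_vram_gb(7e9):.1f} GB")
```

Actual memory use at the full 1,024,000-token context would be far higher once the KV cache is accounted for.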
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| Cydonia 24B V4.1 | 128K / 47.3 GB | 3471 | 61 |
| M3.2 24B Loki V1.3 | 128K / 47.3 GB | 1001 | 33 |
| Skyfall 31B V4 | 128K / 62.9 GB | 240 | 18 |
| ...2 PaintedFantasy Visage V3 34B | 128K / 68.4 GB | 168 | 22 |
| Cydonia 24B V4 | 128K / 47.3 GB | 857 | 34 |
| Codex 24B Small 3.2 | 128K / 47.3 GB | 544 | 54 |
| MS3.2 PaintedFantasy V2 24B | 128K / 47.3 GB | 217 | 30 |
| Cydonia R1 24B V4 | 128K / 47.3 GB | 169 | 29 |
| MS3.2 24B Magnum Diamond | 128K / 47.3 GB | 394 | 40 |
| Harbinger 24B | 128K / 47.3 GB | 551 | 67 |