Fireball Mistral Nemo Base 2407 V1 DPO2 by EpistemeAI


Tags: autotrain-compatible, base model: epistemeai/fireball..., base model (finetune): epistemeai..., en, endpoints-compatible, mistral, region:us, safetensors, sharded, tensorflow, trl, unsloth

Fireball Mistral Nemo Base 2407 V1 DPO2 Benchmarks

Benchmark scores (nn.n%) indicate how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4"). No benchmark results are currently listed for this model.

Fireball Mistral Nemo Base 2407 V1 DPO2 Parameters and Internals

Model Type: text-generation-inference, transformers, mistral, trl
Additional Notes: This model was fine-tuned from EpistemeAI/Fireball-Nemo-Base-2407-sft-v1.
Supported Languages: en (proficient)
Training Details:
Methodology: This Mistral model was trained 2x faster with Unsloth and Hugging Face's TRL library.
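
The card names only the tooling (Unsloth + TRL) and the SFT base checkpoint, so the following is a minimal sketch of what a DPO run in that setup can look like. The preference dataset (Intel/orca_dpo_pairs), LoRA settings, and all hyperparameters are illustrative assumptions, not the maintainer's actual configuration.

```python
# Hedged sketch of a DPO fine-tune with Unsloth + TRL, as named on the card.
# Dataset, LoRA settings, and hyperparameters are assumptions for illustration.
from unsloth import FastLanguageModel
from trl import DPOConfig, DPOTrainer
from datasets import load_dataset

# Start from the SFT checkpoint this model was fine-tuned from (per the card).
model, tokenizer = FastLanguageModel.from_pretrained(
    "EpistemeAI/Fireball-Nemo-Base-2407-sft-v1",
    max_seq_length=2048,   # assumed training length, not the 1024K inference limit
    load_in_4bit=True,     # assumed QLoRA-style quantization to save memory
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# DPO needs (prompt, chosen, rejected) preference pairs; this public dataset
# is only a stand-in -- the card does not say what data was actually used.
dataset = load_dataset("Intel/orca_dpo_pairs", split="train")
dataset = dataset.rename_column("question", "prompt").remove_columns("system")

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(
        output_dir="fireball-dpo2",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        learning_rate=5e-6,
        beta=0.1,          # strength of the KL penalty toward the reference model
        max_steps=100,
    ),
    train_dataset=dataset,
    tokenizer=tokenizer,   # newer TRL versions call this `processing_class`
)
trainer.train()
```
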
LLM Name: Fireball Mistral Nemo Base 2407 V1 DPO2
Repository: 🤗 https://huggingface.co/EpistemeAI/Fireball-Mistral-Nemo-Base-2407-v1-DPO2
Base Model(s): EpistemeAI/Fireball-Nemo-Base-2407-sft-v1
Model Size: 12.2B
Required VRAM: 24.5 GB
Updated: 2025-06-09
Maintainer: EpistemeAI
Model Type: mistral
Model Files: five safetensors shards, 4.9 GB each (1-of-5 through 5-of-5)
Supported Languages: en
Model Architecture: MistralForCausalLM
License: apache-2.0
Context Length: 1,024,000
Model Max Length: 1,024,000
Transformers Version: 4.44.0
Tokenizer Class: PreTrainedTokenizerFast
Padding Token: <pad>
Vocabulary Size: 131,072
Torch Data Type: bfloat16
Fireball Mistral Nemo Base 2407 V1 DPO2 (EpistemeAI/Fireball-Mistral-Nemo-Base-2407-v1-DPO2)
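
Given the specs above (MistralForCausalLM, five ~4.9 GB safetensors shards, bfloat16, transformers 4.44.0), a load-and-generate sketch might look like the following; the prompt and device placement are assumptions for illustration.

```python
# Minimal sketch: load the sharded bfloat16 checkpoint listed above.
# Requires transformers >= 4.44.0, accelerate, and roughly 24.5 GB of VRAM.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "EpistemeAI/Fireball-Mistral-Nemo-Base-2407-v1-DPO2"

tokenizer = AutoTokenizer.from_pretrained(repo)  # PreTrainedTokenizerFast, vocab 131072
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.bfloat16,  # matches the card's Torch Data Type
    device_map="auto",           # spreads the five 4.9 GB shards across available devices
)

# The config advertises a 1,024,000-token context; usable length is bounded
# by memory, so a short prompt is used here for illustration.
inputs = tokenizer("The Fireball model is", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```
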

Best Alternatives to Fireball Mistral Nemo Base 2407 V1 DPO2

Best Alternatives                 | Context / RAM   | Downloads | Likes
...aptain Eris Violet GRPO V0.420 | 1000K / 24.5 GB | 125       | 22
...ish Mistral Nemo Instruct 2407 | 1000K / 24.5 GB | 77        | 2
Violet Twilight V0.2              | 1000K / 24.5 GB | 159       | 32
Mistral Nemo Kurdish              | 1000K / 24.5 GB | 29        | 3
Mergekit Nuslerp Fybnaus          | 1000K / 24.5 GB | 37        | 0
Goldenworking                     | 1000K / 24.5 GB | 6         | 0
MMRExCEV GRPO V0.420              | 1000K / 24.5 GB | 12        | 2
Mergekit Nuslerp Sahgtwn          | 1000K / 24.5 GB | 9         | 0
Mergekit Model Stock Lvezkfe      | 1000K / 24.5 GB | 9         | 0
MMR E1                            | 1000K / 24.5 GB | 4         | 1
Note: a green score (e.g. "73.2") means that the model is better than EpistemeAI/Fireball-Mistral-Nemo-Base-2407-v1-DPO2.

Rank the Fireball Mistral Nemo Base 2407 V1 DPO2 Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124