Mxbai Rerank Large V2 by mixedbread-ai


Tags: arxiv:2506.03487 · autotrain-compatible · endpoints-compatible · qwen2 · region:us · safetensors · text-ranking · multilingual (see Supported Languages below)
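
The text-ranking tag is the important one: this model is a reranker that scores query-document pairs rather than generating chat responses. A minimal usage sketch, assuming the maintainer's mxbai-rerank Python package (pip install mxbai-rerank) as named in the upstream Hugging Face model card; the class and method names follow that card, and the query and documents are illustrative, so verify against the installed package version:

    # Reranking sketch; API names follow the upstream model card for the
    # `mxbai-rerank` package and should be verified against the installed version.
    from mxbai_rerank import MxbaiRerankV2

    model = MxbaiRerankV2("mixedbread-ai/mxbai-rerank-large-v2")

    query = "What is the capital of France?"  # illustrative
    documents = [
        "Paris is the capital and largest city of France.",
        "Berlin is the capital of Germany.",
        "The Eiffel Tower is a landmark in Paris.",
    ]

    # Returns the documents sorted by relevance score, highest first.
    results = model.rank(query, documents, return_documents=True, top_k=3)
    for result in results:
        print(result)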

Mxbai Rerank Large V2 Benchmarks

Scores (nn.n%) show how the model compares to the reference models: Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Mxbai Rerank Large V2 (mixedbread-ai/mxbai-rerank-large-v2)

Mxbai Rerank Large V2 Parameters and Internals

LLM Name: Mxbai Rerank Large V2
Repository (Hugging Face): https://huggingface.co/mixedbread-ai/mxbai-rerank-large-v2
Model Size: 1.5b
Required VRAM: 3.1 GB
Updated: 2025-07-10
Maintainer: mixedbread-ai
Model Type: qwen2
Model Files: 3.1 GB
Supported Languages: af am ar as az be bg bn br bs ca cs cy da de el en eo es et eu fa ff fi fr fy ga gd gl gn gu ha he hi hr ht hu hy id ig is it ja jv ka kk km kn ko ku ky la lg li ln lo lt lv mg mk ml mn mr ms my ne nl no ns om or pa pl ps pt qu rm ro ru sa sc sd si sk sl so sq sr ss su sv sw ta te th tl tn tr ug uk ur uz vi wo xh yi yo zh zu
Model Architecture: Qwen2ForCausalLM
License: apache-2.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.49.0
Tokenizer Class: Qwen2Tokenizer
Padding Token: <|endoftext|>
Vocabulary Size: 151936
Torch Data Type: bfloat16
Errors: replace
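
The listed VRAM and file size follow from the data type: roughly 1.5B parameters at 2 bytes each in bfloat16 comes to about 3.1 GB. A minimal sketch, using the Hugging Face transformers library, that loads the repository and prints the values tabulated above (expect a ~3.1 GB weight download); the expected values in the comments come from the table:

    # Check the listed internals against the actual repository.
    # Requires `pip install transformers torch`; downloads ~3.1 GB of weights.
    import torch
    from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

    repo = "mixedbread-ai/mxbai-rerank-large-v2"

    config = AutoConfig.from_pretrained(repo)
    print(config.architectures)             # expected: ['Qwen2ForCausalLM']
    print(config.max_position_embeddings)   # expected: 32768 (context length)
    print(config.vocab_size)                # expected: 151936

    tokenizer = AutoTokenizer.from_pretrained(repo)
    print(type(tokenizer).__name__)         # Qwen2Tokenizer (or its fast variant)
    print(tokenizer.pad_token)              # expected: <|endoftext|>

    model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16)
    n_params = sum(p.numel() for p in model.parameters())
    print(f"{n_params / 1e9:.2f}B parameters")  # ~1.5B; at 2 bytes each ≈ 3.1 GB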

Best Alternatives to Mxbai Rerank Large V2

Best Alternatives                    Context / RAM    Downloads   Likes
ReaderLM V2                          500K / 3.1 GB        15489     664
Reader Lm 1.5B                       250K / 3.1 GB         2130     600
DeepSeek R1 Distill Qwen 1.5B        128K / 3.5 GB      1086409    1260
...n Research Reasoning Qwen 1.5B    128K / 3.5 GB        13521     173
DeepScaleR 1.5B Preview              128K / 7.1 GB       157750     561
Qwen2.5 1.5B                         128K / 3.1 GB       485373     107
AceInstruct 1.5B                     128K / 3.5 GB         6224      20
DeepCoder 1.5B Preview               128K / 7.1 GB         4089      66
Qwen2 1.5B                           128K / 3.1 GB        90753      94
Stella En 1.5B V5                    128K / 6.2 GB       581890     211



Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124