| LLM Name | Mxbai Rerank Large V2 |
| Repository 🤗 | https://huggingface.co/mixedbread-ai/mxbai-rerank-large-v2 |
| Model Size | 1.5B |
| Required VRAM | 3.1 GB |
| Updated | 2025-09-23 |
| Maintainer | mixedbread-ai |
| Model Type | qwen2 |
| Model Files | |
| Supported Languages | af am ar as az be bg bn br bs ca cs cy da de el en eo es et eu fa ff fi fr fy ga gd gl gn gu ha he hi hr ht hu hy id ig is it ja jv ka kk km kn ko ku ky la lg li ln lo lt lv mg mk ml mn mr ms my ne nl no ns om or pa pl ps pt qu rm ro ru sa sc sd si sk sl so sq sr ss su sv sw ta te th tl tn tr ug uk ur uz vi wo xh yi yo zh zu |
| Model Architecture | Qwen2ForCausalLM |
| License | apache-2.0 |
| Context Length | 32768 |
| Model Max Length | 32768 |
| Transformers Version | 4.49.0 |
| Tokenizer Class | Qwen2Tokenizer |
| Padding Token | <|endoftext|> |
| Vocabulary Size | 151936 |
| Torch Data Type | bfloat16 |
| Tokenizer Errors Handling | replace |
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| ReaderLM V2 | 500K / 3.1 GB | 17305 | 701 |
| Reader Lm 1.5B | 250K / 3.1 GB | 1411 | 607 |
| DeepSeek R1 Distill Qwen 1.5B | 128K / 3.5 GB | 602861 | 1342 |
| AceInstruct 1.5B | 128K / 3.5 GB | 76880 | 20 |
| ...n Research Reasoning Qwen 1.5B | 128K / 7.1 GB | 5716 | 221 |
| DeepScaleR 1.5B Preview | 128K / 7.1 GB | 11774 | 573 |
| Qwen2.5 1.5B | 128K / 3.1 GB | 261977 | 127 |
| OpenReasoning Nemotron 1.5B | 128K / 3.1 GB | 5188 | 47 |
| Qwen2 1.5B | 128K / 3.1 GB | 83734 | 97 |
| Stella En 1.5B V5 | 128K / 6.2 GB | 581890 | 211 |
Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference!