| LLM Name | Mxbai Rerank Base V2 |
| Repository 🤗 | https://huggingface.co/mixedbread-ai/mxbai-rerank-base-v2 |
| Model Size | 494M |
| Required VRAM | 1 GB |
| Updated | 2025-11-02 |
| Maintainer | mixedbread-ai |
| Model Type | qwen2 |
| Model Files | |
| Supported Languages | af am ar as az be bg bn br bs ca cs cy da de el en eo es et eu fa ff fi fr fy ga gd gl gn gu ha he hi hr ht hu hy id ig is it ja jv ka kk km kn ko ku ky la lg li ln lo lt lv mg mk ml mn mr ms my ne nl no ns om or pa pl ps pt qu rm ro ru sa sc sd si sk sl so sq sr ss su sv sw ta te th tl tn tr ug uk ur uz vi wo xh yi yo zh zu |
| Model Architecture | Qwen2ForCausalLM |
| License | apache-2.0 |
| Context Length | 32768 |
| Model Max Length | 32768 |
| Transformers Version | 4.49.0 |
| Tokenizer Class | Qwen2Tokenizer |
| Padding Token | <|endoftext|> |
| Vocabulary Size | 151936 |
| Torch Data Type | bfloat16 |
| Tokenizer Errors Policy | replace |
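
The table above maps directly onto a standard `transformers` load. Below is a minimal sketch (not taken from the model card) that loads the model with the listed data type and prints a few of the configuration values for verification; the model ID and dtype come from the table, everything else is plain `transformers` usage.

```python
# Minimal sketch: load mxbai-rerank-base-v2 and sanity-check the listed config.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mixedbread-ai/mxbai-rerank-base-v2"

tokenizer = AutoTokenizer.from_pretrained(model_id)      # Qwen2Tokenizer
model = AutoModelForCausalLM.from_pretrained(            # Qwen2ForCausalLM
    model_id,
    torch_dtype=torch.bfloat16,                          # Torch Data Type from the table
)

print(model.config.max_position_embeddings)  # expected: 32768 (Context Length)
print(tokenizer.model_max_length)            # expected: 32768 (Model Max Length)
print(model.config.vocab_size)               # expected: 151936 (Vocabulary Size)
print(tokenizer.pad_token)                   # expected: <|endoftext|> (Padding Token)
```

Note that this raw load only verifies the configuration; for actual query–document reranking, the upstream repository documents a dedicated `mxbai-rerank` Python package with a higher-level scoring API.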
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| Qwen2 Sft Warmup | 128K / 0 GB | 6 | 0 |
| Qwen2 0.5Bchp 690K | 128K / 2 GB | 6 | 0 |
| Qwen2 0.5Bchp 570K | 128K / 2 GB | 6 | 0 |
| Qwen2 0.5Bchp 15K | 128K / 2 GB | 6 | 0 |
| Qwen2 0.5Bchp 300000 | 128K / 2 GB | 6 | 0 |
| Qwen2 0.5Bchp 20000 | 128K / 2 GB | 6 | 0 |
| Svig Tiny Step 3.7K | 128K / 2 GB | 5 | 0 |
| ...en2 Emotions Without Reasoning | 128K / 2 GB | 9 | 0 |
| Tinyqwen | 128K / 1 GB | 5 | 0 |
| Hamlet Merged | 32K / 2 GB | 23 | 0 |
Have you tried this model? Rate its performance. This feedback helps the ML community identify the most suitable model for their needs. Your contribution really does make a difference!