Phr00tyMix V2 32B by Phr00t


Tags: Merged Model · Conversational · Creative writing · Deepseek · Endpoints compatible · Qwen · Qwen2 · Qwq · R1 · Region:us · Roleplay · Rp · Safetensors · Sharded · Tensorflow

Phr00tyMix V2 32B Benchmarks

Scores (nn.n%) show how the model compares to the reference models: Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").

Phr00tyMix V2 32B Parameters and Internals

LLM Name: Phr00tyMix V2 32B
Repository 🤗: https://huggingface.co/Phr00t/Phr00tyMix-v2-32B
Base Model(s): ArliAI/QwQ-32B-ArliAI-RpR-v4 · huihui-ai/DeepSeek-R1-Distill-Qwen-32B-abliterated · allura-org/Qwen2.5-32b-RP-Ink · Delta-Vector/Hamanasu-Magnum-QwQ-32B · Sao10K/32B-Qwen2.5-Kunou-v1 · nbeerbower/EVA-Gutenberg3-Qwen2.5-32B
Merged Model: Yes
Model Size: 32b
Required VRAM: 65.8 GB
Updated: 2026-03-01
Maintainer: Phr00t
Model Type: qwen2
Model Files: 14 safetensors shards (shards 1–2 of 14: 5.0 GB each; shards 3–13 of 14: 4.9 GB each; shard 14 of 14: 1.9 GB)
Model Architecture: Qwen2ForCausalLM
Context Length: 131072
Model Max Length: 131072
Transformers Version: 4.52.4
Tokenizer Class: LlamaTokenizer
Padding Token: <|end▁of▁sentence|>
Vocabulary Size: 152064
Torch Data Type: bfloat16
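
Putting the fields above together, here is a minimal loading sketch with Hugging Face transformers. It is untested here and assumes transformers ≥ 4.52.4 (the version listed), a Qwen2-style chat template shipped with the repo, and roughly 66 GB of accelerator memory (or device_map offloading) for the bfloat16 weights; the prompt is purely illustrative.

```python
# Minimal sketch: load Phr00t/Phr00tyMix-v2-32B and generate one reply.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Phr00t/Phr00tyMix-v2-32B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the Torch Data Type listed above
    device_map="auto",           # shard the 14 weight files across available devices
)

# The padding token <|end▁of▁sentence|> comes from the DeepSeek-R1 distill base;
# the chat template (assumed present in the repo) handles role formatting.
messages = [{"role": "user", "content": "Write a short scene set in a lighthouse."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```

With device_map="auto", Accelerate splits the shards across available GPUs and offloads the remainder to CPU; fully on-GPU inference needs about 65.8 GB, matching the Required VRAM figure above.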

Best Alternatives to Phr00tyMix V2 32B

Best Alternatives | Context / RAM | Downloads | Likes
Openbuddy Qwq 32B V24.2 200K | 195K / 65.8 GB | 1 | 3
Openbuddy Qwq 32B V24.1 200K | 195K / 65.8 GB | 6 | 3
Openbuddy Qwq 32B V25.2q 200K | 195K / 65.8 GB | 1 | 4
Openbuddy Qwen2.5coder 32B V24.1q 200K | 195K / 65.8 GB | 12 | 2
DeepSeek R1 Distill Qwen 32B | 128K / 65.7 GB | 1727327 | 1512
Qwen2.5 32B | 128K / 65.5 GB | 165710 | 168
K2 Think | 128K / 65.8 GB | 3935 | 360
Baichuan M2 32B | 128K / 65.8 GB | 6590 | 396
RomboUltima 32B | 128K / 20.7 GB | 19 | 6
...Qwen2.5 32B Inst BaseMerge TIES | 128K / 65.8 GB | 78 | 16
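The RAM column tracks weight size directly: memory scales as parameters × bytes per parameter, so a ~32B-parameter model in bfloat16 (2 bytes per parameter) lands near the 65.8 GB listed above, while the 20.7 GB entry implies heavily quantized weights. A back-of-envelope sketch follows; the 32.8e9 parameter count is an assumption based on Qwen2.5-32B, and activations, KV cache, and framework overhead are ignored.

```python
# Rough weight-memory estimate: parameters * bits per parameter / 8.
# This is a floor, not a total: activations and KV cache come on top.
def weight_gb(n_params: float, bits_per_param: float) -> float:
    return n_params * bits_per_param / 8 / 1e9

print(weight_gb(32.8e9, 16))  # ~65.6 GB -> bfloat16, close to the 65.8 GB entries
print(weight_gb(32.8e9, 5))   # ~20.5 GB -> close to the 20.7 GB quantized entry
```

By the same arithmetic, an 8-bit quantization of a 32B model would land near 33 GB.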


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124