BeyondInfinity V2 4x7B AWQ by solidrust


BeyondInfinity V2 4x7B AWQ is an open-source language model by solidrust. Key figures: 24.2B parameters, 13 GB required VRAM, 32K context, mixture-of-experts (MoE), AWQ-quantized, LLM Explorer Score: 0.14.
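As a rough sanity check on the 13 GB figure (a back-of-the-envelope estimate, not from the model card): 24.2 billion parameters at 4 bits each pack into about 12.1 GB of weights, with quantization scales/zero-points and any unquantized layers accounting for the remainder.

```python
# Back-of-the-envelope VRAM estimate for a 4-bit quantized model.
# The idea that overhead (scales, zero-points, unquantized layers)
# covers the gap to 13 GB is an assumption, not a card value.
params = 24.2e9
bits_per_weight = 4
weight_bytes = params * bits_per_weight / 8   # 12.1e9 bytes
print(f"{weight_bytes / 1e9:.1f} GB of packed weights")  # -> 12.1 GB
```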

4-bit | AutoTrain compatible | AWQ | Base model (quantized): R136a1/BeyondInfinity-v2-4x7B | Endpoints compatible | Mixtral | MoE | Quantized | Region: us | Safetensors | Sharded | TensorFlow

BeyondInfinity V2 4x7B AWQ Benchmarks

Benchmark scores are reported as percentages relative to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").
Model evaluated: BeyondInfinity V2 4x7B AWQ (solidrust/BeyondInfinity-v2-4x7B-AWQ)

BeyondInfinity V2 4x7B AWQ Parameters and Internals

Model Type: text-generation
Additional Notes: AWQ provides faster Transformers-based inference with quality equivalent to or better than GPTQ, using 4-bit quantization (see the loading sketch below).
Input Format: tokens
Accepted Modalities: text
Output Format: text
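A minimal loading sketch, assuming `transformers` (4.41.0 per the card) and `autoawq` are installed; the prompt and generation settings are illustrative, not from the model card.

```python
# Sketch: running the AWQ checkpoint through Transformers.
# Assumes `transformers` and `autoawq` are installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "solidrust/BeyondInfinity-v2-4x7B-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the card's Torch Data Type
    device_map="auto",          # places the ~13 GB of 4-bit weights on available GPUs
)

inputs = tokenizer(
    "Explain mixture-of-experts in one sentence.",
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```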
LLM Name: BeyondInfinity V2 4x7B AWQ
Repository: https://huggingface.co/solidrust/BeyondInfinity-v2-4x7B-AWQ
Base Model(s): BeyondInfinity V2 4x7B (R136a1/BeyondInfinity-v2-4x7B)
Model Size: 24.2B
Required VRAM: 13 GB
Updated: 2026-04-11
Maintainer: solidrust
Model Type: mixtral
Model Files: 5.0 GB (1 of 3), 5.0 GB (2 of 3), 3.0 GB (3 of 3)
AWQ Quantization: Yes
Quantization Type: awq
Model Architecture: MixtralForCausalLM
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.41.0
Tokenizer Class: LlamaTokenizer
Padding Token: <s>
Vocabulary Size: 32000
Torch Data Type: float16
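The architecture and tokenizer values above can be cross-checked against the repository's config without downloading the weights; a short sketch, assuming network access to huggingface.co:

```python
# Sketch: reading the spec-sheet values straight from the repo's
# config and tokenizer files.
from transformers import AutoConfig, AutoTokenizer

repo = "solidrust/BeyondInfinity-v2-4x7B-AWQ"
config = AutoConfig.from_pretrained(repo)
tokenizer = AutoTokenizer.from_pretrained(repo)

print(config.model_type)               # mixtral
print(config.max_position_embeddings)  # 32768
print(config.vocab_size)               # 32000
print(tokenizer.__class__.__name__)    # LlamaTokenizer (or its fast variant)
print(tokenizer.pad_token)             # <s>, per the card, if set in tokenizer_config
```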

Best Alternatives to BeyondInfinity V2 4x7B AWQ

Best Alternatives                  Context / RAM   Downloads   Likes
...r Dolphin Mixtral 4x7b DPO AWQ  32K / 13 GB     0           1
Everyone Coder 4x7b Base AWQ       32K / 13 GB     19          2
Beyonder 4x7B V2 AWQ               32K / 13 GB     6           3
Mixtralnt 4x7b Test AWQ            32K / 13 GB     7           1
Dzakwan MoE 4x7b Beta              32K / 48.4 GB   9072        0
Beyonder 4x7B V3                   32K / 48.3 GB   7856        60
Calme 4x7B MoE V0.2                32K / 48.3 GB   8127        2
Mera Mix 4x7B                      32K / 48.3 GB   8157        19
Calme 4x7B MoE V0.1                32K / 48.3 GB   8003        2
MixtureofMerges MoE 4x7b V5        32K / 48.3 GB   8117        1
Note: a green score (e.g., "73.2") on the site indicates that the model outperforms solidrust/BeyondInfinity-v2-4x7B-AWQ.



Original data from Hugging Face, OpenCompass, and various public Git repositories.