Quyen V0.1 Mlx 4bit by vilm


Quyen V0.1 Mlx 4bit is an open-source language model by vilm. Key specs: 945.9M parameters, 2.8 GB VRAM required, 32K context, license: other, 4-bit quantized, LLM Explorer Score: 0.12.

Tags: 4bit · Conversational · Dataset:argilla/distilabel-cap... · Dataset:intel/orca dpo pairs · Dataset:ldjnr/capybara · Dataset:teknium/openhermes-2.5 · En · Endpoints compatible · Mlx · Quantized · Qwen2 · Region:us · Safetensors

Quyen V0.1 Mlx 4bit Benchmarks

Quyen V0.1 Mlx 4bit (vilm/Quyen-v0.1-mlx-4bit)

Quyen V0.1 Mlx 4bit Parameters and Internals

Model Type: text-generation
Additional Notes: This model was converted to MLX format from `vilm/Quyen-v0.1`. Refer to the [original model card](https://huggingface.co/vilm/Quyen-v0.1) for more details on the model.
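Qwen2-family models typically use the ChatML prompt format, so Quyen presumably inherits it from its Qwen base. As a minimal sketch (an assumption, not a confirmed template; check the tokenizer's chat template in the original repo), building such a prompt by hand looks like this:

```python
# Sketch: ChatML-style prompt construction for a Qwen2-family model.
# Assumption: Quyen uses the standard <|im_start|>/<|im_end|> markers
# inherited from Qwen; verify against the tokenizer's chat_template.

def build_chatml_prompt(messages):
    """Render a list of {role, content} dicts as a ChatML prompt string."""
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>")
    # Leave the assistant turn open so generation continues from here.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = build_chatml_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
])
print(prompt)
```

In practice a tokenizer's built-in chat template should be preferred over hand-built strings, since it encodes the exact markers the model was trained on.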
LLM Name: Quyen V0.1 Mlx 4bit
Repository: https://huggingface.co/vilm/Quyen-v0.1-mlx-4bit
Model Size: 945.9m
Required VRAM: 2.8 GB
Updated: 2026-04-04
Maintainer: vilm
Model Type: qwen2
Model Files: 2.8 GB
Supported Languages: en
Quantization Type: 4bit
Model Architecture: Qwen2ForCausalLM
License: other
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.37.2
Padding Token: <|endoftext|>
Vocabulary Size: 151936
Torch Data Type: bfloat16
Errors: replace
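The 32,768-token context length above bounds the prompt and the completion together, so the room left for generation shrinks as the prompt grows. A trivial sketch of that budget (token counts would come from the model's tokenizer in practice):

```python
# Sketch: budgeting generation length against the model's context window.
# MAX_CONTEXT matches the "Model Max Length" listed above.

MAX_CONTEXT = 32768

def max_new_tokens(prompt_tokens, max_context=MAX_CONTEXT):
    """Tokens still available for generation after the prompt."""
    return max(0, max_context - prompt_tokens)

print(max_new_tokens(30000))  # 2768 tokens left for the reply
print(max_new_tokens(40000))  # 0: the prompt alone overflows the window
```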

Best Alternatives to Quyen V0.1 Mlx 4bit

| Best Alternatives | Context / RAM | Downloads / Likes |
|---|---|---|
| Deep Ft14 Grp 16bit | 128K / 3.5 GB | 70 |
| Deep Ft13 Grp 16bit | 128K / 3.5 GB | 50 |
| Deep Ft11 Grp 16bit | 128K / 3.5 GB | 120 |
| Deep Ft12 Grp 16bit | 128K / 3.5 GB | 50 |
| Deep Ft9 Grp 16bit | 128K / 3.5 GB | 50 |
| Deep Ft6 Grp 16bit | 128K / 3.5 GB | 50 |
| Viper Coder V1.1 4bit | 32K / 8.3 GB | 131 |
| Qwen2.5 | 128K / 15.2 GB | 50 |
| A.X 4.0 | 128K / 144.4 GB | 539173 |
| Palmyra Mini Thinking A | 128K / 3.5 GB | 40726 |
Note: green Score (e.g. "73.2") means that the model is better than vilm/Quyen-v0.1-mlx-4bit.

Rank the Quyen V0.1 Mlx 4bit Capabilities

🆘 Have you tried this model? Rate its performance. This feedback helps the ML community identify the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  

Looking for other open-source LLMs or SLMs? The catalog lists 52,721 in total.

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20260328a