Liberated Qwen1.5 14B 4bit by titan087


Tags: 4-bit, 4bit, Autotrain compatible, Conversational, Dataset:abacusai/systemchat, Dataset:m-a-p/code-feedback, Dataset:m-a-p/codefeedback-fil..., Dataset:teknium/openhermes-2.5, En, Endpoints compatible, Gptq, Pytorch, Quantized, Qwen2, Region:us

Liberated Qwen1.5 14B 4bit Benchmarks

Liberated Qwen1.5 14B 4bit (titan087/Liberated-Qwen1.5-14B-4bit)

Liberated Qwen1.5 14B 4bit Parameters and Internals

Model Type: AI assistant

Use Cases
Areas: AI assistance, conversational agent
Applications: compliance with system prompts, multi-turn conversations
Primary Use Cases: long-context interaction
Limitations: no guardrails or censorship
Considerations: implement an alignment layer before public use (a minimal sketch follows below).
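Because the SystemChat data teaches the model to follow its system prompt across long, multi-turn conversations, the most basic alignment layer is a fixed system message plus a check on the generated text. A minimal sketch, assuming a transformers model/tokenizer pair; the policy prompt and the violates_policy stub are placeholders for illustration, not part of the released model:

```python
# Illustrative alignment layer: fixed system prompt + post-generation check.
# POLICY_PROMPT and violates_policy() are placeholders, not shipped with the model.
POLICY_PROMPT = (
    "You are a helpful assistant. Refuse requests for illegal or harmful content."
)

def violates_policy(text: str) -> bool:
    # Placeholder: plug in a moderation classifier or keyword filter here.
    return False

def guarded_chat(model, tokenizer, user_message: str, max_new_tokens: int = 256) -> str:
    messages = [
        {"role": "system", "content": POLICY_PROMPT},
        {"role": "user", "content": user_message},
    ]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(inputs, max_new_tokens=max_new_tokens)
    reply = tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True)
    return "Sorry, I can't help with that." if violates_policy(reply) else reply
```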
Additional Notes
Training config available at https://huggingface.co/abacusai/Liberated-Qwen1.5-72B/blob/main/configs/Liberated-Qwen-1.5-72b.qlora.yml
Supported Languages: en (proficient)
Training Details
Data Sources: teknium/OpenHermes-2.5, m-a-p/Code-Feedback, m-a-p/CodeFeedback-Filtered-Instruction, abacusai/SystemChat
Methodology: fine-tuned with qLoRA using DeepSpeed ZeRO-2 (a training sketch follows below)
Context Length: 32000
Training Time: 1 day
Hardware Used: 8x H100 GPUs
Prompt Format: ChatML
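The linked qLoRA config is the authoritative record of the training setup; the sketch below only illustrates the general qLoRA recipe, assuming the transformers/peft/bitsandbytes stack, with placeholder LoRA hyperparameters. DeepSpeed ZeRO-2 would be supplied as a separate launcher config when running on the 8x H100 node.

```python
# Minimal qLoRA sketch: 4-bit NF4 base weights + LoRA adapters.
# All hyperparameters are placeholders -- see the linked config for the real values.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base = "Qwen/Qwen1.5-14B"  # upstream base of Liberated-Qwen1.5-14B

bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    base, quantization_config=bnb, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

lora = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,  # placeholder values
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()
# Training itself would run through a Trainer or axolotl, launched with a
# DeepSpeed ZeRO-2 config, e.g. `deepspeed --num_gpus 8 train.py ...`.
```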
Input Output
Input Format: ChatML (see the example after this list)
Accepted Modalities: text
Output Format: text in JSON format
Performance Tips: ensure compliance and safety filters are in place.
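ChatML wraps every turn in <|im_start|>...<|im_end|> markers, and the repo's Qwen2Tokenizer ships a ChatML chat template, so prompts can be built with apply_chat_template. A minimal sketch; the system and user messages are illustrative only:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("titan087/Liberated-Qwen1.5-14B-4bit")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},  # illustrative
    {"role": "user", "content": "Summarize what GPTQ quantization does."},
]

# add_generation_prompt=True appends the opening <|im_start|>assistant tag,
# so the model continues with the assistant's reply.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
# <|im_start|>system
# You are a helpful assistant.<|im_end|>
# <|im_start|>user
# Summarize what GPTQ quantization does.<|im_end|>
# <|im_start|>assistant
```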
LLM Name: Liberated Qwen1.5 14B 4bit
Repository: https://huggingface.co/titan087/Liberated-Qwen1.5-14B-4bit
Base Model(s): abacusai/Liberated-Qwen1.5-14B
Model Size: 14b
Required VRAM: 9.7 GB
Updated: 2025-08-19
Maintainer: titan087
Model Type: qwen2
Model Files: 9.7 GB
Supported Languages: en
GPTQ Quantization: Yes
Quantization Type: gptq|4bit
Model Architecture: Qwen2ForCausalLM
License: other
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.39.0.dev0
Tokenizer Class: Qwen2Tokenizer
Padding Token: <|endoftext|>
Vocabulary Size: 152064
Torch Data Type: float16
Errors: replace
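A minimal loading-and-generation sketch based on the specs above (GPTQ 4-bit weights of about 9.7 GB, float16, 32768-token context). It assumes a CUDA GPU and a GPTQ backend (optimum with auto-gptq or gptqmodel) installed alongside transformers:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "titan087/Liberated-Qwen1.5-14B-4bit"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# The GPTQ quantization config is read from the repo; the ~9.7 GB of weights
# plus KV cache for short prompts should fit on a single 16 GB GPU.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype=torch.float16,
)

messages = [{"role": "user", "content": "Write a haiku about quantization."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    out = model.generate(inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Serving stacks such as vLLM and Text Generation Inference can also load GPTQ checkpoints directly if the model needs to run behind an API.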

Best Alternatives to Liberated Qwen1.5 14B 4bit

Best Alternatives | Context / RAM | Downloads / Likes
Qwen2.5 14B Instruct GPTQ Int4 | 32K / 10 GB | 1069721
...5 Coder 14B Instruct GPTQ Int4 | 32K / 10 GB | 24465
Qwen2.5 14B Instruct GPTQ Int8 | 32K / 16.8 GB | 379821
...5 Coder 14B Instruct GPTQ Int8 | 32K / 16.8 GB | 10765
VinaLlama2 14B GPTQ Int4 | 32K / 9.7 GB | 42
Qwen1.5 14B Chat GPTQ Int4 | 32K / 9.9 GB | 121821
Qwen1.5 14B Chat GPTQ Int8 | 32K / 16.5 GB | 1311
Qwen2.5 14B Instruct 1M 8bit | 986K / 15.7 GB | 337
...B Instruct 1M Unsloth Bnb 4bit | 986K / 14.3 GB | 144
Qwen2.5 14B Instruct 1M 4bit | 986K / 8.3 GB | 182


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124