Liberated Qwen1.5 72B by abacusai


Autotrain compatible · Conversational · Dataset:abacusai/systemchat · Dataset:m-a-p/code-feedback · Dataset:m-a-p/codefeedback-filtered-instruction · Dataset:teknium/openhermes-2.5 · En · Endpoints compatible · Pytorch · Qwen2 · Region:us · Sharded

Liberated Qwen1.5 72B Benchmarks

Liberated Qwen1.5 72B (abacusai/Liberated-Qwen1.5-72B)

Liberated Qwen1.5 72B Parameters and Internals

Model Type: text generation

Use Cases:
Areas: research, commercial applications
Limitations: ships with no guardrails or censorship
Considerations: you are advised to implement your own alignment layer before exposing the model as a service (a minimal sketch follows below).
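
The alignment layer can live entirely outside the model. Below is a minimal sketch of such a wrapper, assuming the model sits behind some `generate(prompt) -> str` callable; the keyword blocklist is purely illustrative, and a production service would swap in a dedicated moderation model or API.

```python
# Minimal sketch of an external alignment layer in front of the model.
# The blocklist rule is a placeholder, not a real moderation policy.

BLOCKLIST = {"make a bomb", "credit card dump"}  # illustrative only

def is_allowed(user_message: str) -> bool:
    """Reject requests that match the blocklist before they reach the model."""
    lowered = user_message.lower()
    return not any(term in lowered for term in BLOCKLIST)

def guarded_generate(user_message: str, generate) -> str:
    """Wrap an arbitrary `generate(prompt) -> str` callable with the filter."""
    if not is_allowed(user_message):
        return "This request was declined by the service's usage policy."
    return generate(user_message)
```
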
Additional Notes: Future releases will focus on mixing this dataset with the datasets used to train Smaug, to combine the properties of both models.
Training Details:
Data Sources: teknium/OpenHermes-2.5, m-a-p/Code-Feedback, m-a-p/CodeFeedback-Filtered-Instruction, abacusai/SystemChat
Methodology: 3 epochs of qLoRA fine-tuning over 3 days on 8x H100s, using DeepSpeed ZeRO-2 and Axolotl, at a learning rate of 2e-4 (a sketch of the qLoRA side follows below)
Context Length: 8000 (fine-tuning sequence length; the released model supports 32768, per the table below)
Training Time: 3 days
Hardware Used: 8x H100 GPUs
Model Architecture: based on Qwen1.5, fine-tuned at the 8000-token sequence length
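
For reference, here is a minimal sketch of the qLoRA side of this recipe using transformers + peft + bitsandbytes. The reported run used Axolotl with DeepSpeed ZeRO-2, which this does not reproduce; the LoRA rank, alpha, dropout, and target modules below are assumptions, not values from the card. Only the learning rate, epoch count, and sequence length come from the details above.

```python
# Sketch of a qLoRA setup: 4-bit quantized base weights + LoRA adapters.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                       # the "q" in qLoRA: 4-bit base weights
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen1.5-72B", quantization_config=bnb_config, device_map="auto"
)

lora = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,  # assumed hyperparameters
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
# Train at lr=2e-4 for 3 epochs with sequence length 8000, as reported above
# (the original run drove this through Axolotl + DeepSpeed ZeRO-2).
```
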
Input Output:
Input Format: ChatML prompt format (example below)
Output Format: plain text; the model obeys system-prompt formatting constraints, e.g. responding only with a JSON object
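
A short sketch of assembling a ChatML prompt with the repo's own tokenizer via `apply_chat_template`; the JSON-only system message is an illustrative constraint, not a required format.

```python
# Build a ChatML prompt with the model's tokenizer.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("abacusai/Liberated-Qwen1.5-72B")
messages = [
    {"role": "system", "content": "You are a helpful assistant. Respond only with a JSON object."},
    {"role": "user", "content": "List three prime numbers."},
]
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)  # <|im_start|>system ... <|im_end|> ... <|im_start|>assistant
```
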
LLM Name: Liberated Qwen1.5 72B
Repository: https://huggingface.co/abacusai/Liberated-Qwen1.5-72B
Model Size: 72B
Required VRAM: 144.5 GB
Updated: 2025-08-23
Maintainer: abacusai
Model Type: qwen2
Model Files: 4.8 GB (1-of-30), 5.0 GB (2-of-30), 5.0 GB (3-of-30), 4.8 GB (4-of-30), 4.8 GB (5-of-30), 4.8 GB (6-of-30), 5.0 GB (7-of-30), 5.0 GB (8-of-30), 4.8 GB (9-of-30), 4.8 GB (10-of-30), 4.8 GB (11-of-30), 5.0 GB (12-of-30), 5.0 GB (13-of-30), 4.8 GB (14-of-30), 4.8 GB (15-of-30), 4.8 GB (16-of-30), 5.0 GB (17-of-30), 5.0 GB (18-of-30), 4.8 GB (19-of-30), 4.8 GB (20-of-30), 4.8 GB (21-of-30), 5.0 GB (22-of-30), 5.0 GB (23-of-30), 4.8 GB (24-of-30), 4.8 GB (25-of-30), 4.8 GB (26-of-30), 5.0 GB (27-of-30), 5.0 GB (28-of-30), 4.8 GB (29-of-30), 2.9 GB (30-of-30)
Supported Languages: en
Model Architecture: Qwen2ForCausalLM
License: other
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.39.0.dev0
Tokenizer Class: Qwen2Tokenizer
Padding Token: <|endoftext|>
Vocabulary Size: 152064
Torch Data Type: float16
Errors: replace
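
Given the specs above (float16 weights totaling ~144.5 GB across 30 shards), here is a minimal loading sketch with transformers; `device_map="auto"` lets accelerate spread the shards across whatever GPUs (and CPU memory) are available.

```python
# Load the full-precision checkpoint and run a short generation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "abacusai/Liberated-Qwen1.5-72B"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,   # matches the card's torch dtype
    device_map="auto",           # shard the 30 files across available devices
)
inputs = tok("Hello", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32)
print(tok.decode(out[0], skip_special_tokens=True))
```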

Quantized Models of the Liberated Qwen1.5 72B

Model | Likes | Downloads | VRAM
Liberated Qwen1.5 72B 4bit | 0 | 5 | 42 GB
Liberated Qwen1.5 72B AWQ | 1 | 4 | 41 GB
Liberated Qwen1.5 72B GGUF | 1 | 25 | 28 GB
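
To run one of the quantized variants on a single large GPU, here is a sketch using llama-cpp-python with the GGUF build; the file name is hypothetical, so substitute whichever quant file you actually download (the ~28 GB figure above corresponds to one of the smaller quants).

```python
# Run a GGUF quantization locally with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="Liberated-Qwen1.5-72B.Q2_K.gguf",  # hypothetical local file
    n_ctx=8192,        # context window to allocate
    n_gpu_layers=-1,   # offload all layers to GPU if it fits
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say hello."}],
    max_tokens=32,
)
print(out["choices"][0]["message"]["content"])
```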

Best Alternatives to Liberated Qwen1.5 72B

Best Alternatives | Context / RAM | Downloads | Likes
...R10528DistillQwen 72B Preview3 | 195K / 146 GB | 33 | 5
Kimi Dev 72B | 128K / 83.4 GB | 5745 | 361
Qwen2.5 72B | 128K / 145.5 GB | 3115 | 178
Homer V1.0 Qwen2.5 72B | 128K / 146.1 GB | 219 | 6
Ultiima 72B | 128K / 146.1 GB | 116 | 3
Ultiima 72B V1.5 | 128K / 146.1 GB | 28 | 0
EVA Qwen2.5 72B V0.2 | 128K / 146 GB | 340 | 22
AceInstruct 72B | 128K / 146 GB | 136 | 17
Qwen2 72B | 128K / 145.5 GB | 10738 | 200
...n2.5 72B 2x Instruct TIES V1.0 | 128K / 146.1 GB | 8 | 1

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124