Bailong Orpo 7B by INX-TEXT


Bailong Orpo 7B is an open-source language model by INX-TEXT. Features: 7b LLM, VRAM: 14GB, Context: 4K, License: llama2, Quantized, Instruction-Based, LLM Explorer Score: 0.14.

  Arxiv:2304.08177   Arxiv:2403.07691   Arxiv:2404.00862 Base model:inx-text/bailong-in... Base model:quantized:inx-text/...   Conversational   Endpoints compatible   Gguf   Instruct   Llama   Orpo   Quantized   Region:us   Safetensors   Sharded   Tensorflow

Bailong Orpo 7B Benchmarks

Benchmark scores (%) compare the model against the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").

Bailong Orpo 7B Parameters and Internals

Model Type 
decoder-only transformer architecture
Additional Notes 
The model was trained with a context length of 2048 tokens on a dataset composed primarily of Traditional Chinese data, with a minor portion of English.
Supported Languages 
Traditional Chinese (high), English (medium)
Training Details 
Methodology:
Specifically, motivated by the Chinese-LLaMA paper, we implemented QLoRA during the secondary pretraining stage instead of the standard full-parameter training method. This approach significantly reduces computational cost while still achieving satisfactory model performance.
Context Length:
2048
Model Architecture:
Bailong 7B is an autoregressive language model with 7B parameters and a decoder-only transformer architecture, derived from secondary pretraining of Llama 2 7B with tied embeddings and an expanded vocabulary.
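To make the QLoRA cost saving concrete, here is a minimal back-of-the-envelope sketch in plain Python. The dimensions, rank, and target modules below are illustrative assumptions for a Llama-2-7B-like model; they are not stated in the Bailong documentation:

```python
# LoRA adds two low-rank matrices, A (r x d_in) and B (d_out x r), per target
# linear layer, so the trainable parameters per layer are r * (d_in + d_out).
def lora_params(d_in: int, d_out: int, r: int) -> int:
    return r * (d_in + d_out)

# Assumed, illustrative numbers: 32 transformer layers, hidden size 4096,
# LoRA rank 8 applied to the four attention projections (q, k, v, o),
# each a 4096 x 4096 linear layer.
layers, hidden, rank = 32, 4096, 8
trainable = layers * 4 * lora_params(hidden, hidden, rank)
total = 7_000_000_000  # ~7B base parameters, all frozen under QLoRA

print(f"trainable LoRA params: {trainable:,}")
print(f"fraction of base model: {trainable / total:.4%}")
```

Under these assumptions only about 8.4M of ~7B parameters (roughly 0.12%) are trained, which is why QLoRA secondary pretraining is so much cheaper than full-parameter training; the 4-bit quantization of the frozen base weights additionally shrinks the memory footprint.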
LLM Name: Bailong Orpo 7B
Repository: 🤗 https://huggingface.co/INX-TEXT/Bailong-orpo-7B
Base Model(s): INX-TEXT/Bailong-instruct-7B
Model Size: 7b
Required VRAM: 14 GB
Updated: 2026-01-16
Maintainer: INX-TEXT
Model Type: llama
Instruction-Based: Yes
Model Files: 13.9 GB total (5.0 GB: 1-of-3, 5.0 GB: 2-of-3, 4.0 GB: 3-of-3)
GGUF Quantization: Yes
Quantization Type: gguf
Model Architecture: LlamaForCausalLM
License: llama2
Context Length: 4096
Model Max Length: 4096
Transformers Version: 4.38.0
Tokenizer Class: LlamaTokenizer
Vocabulary Size: 59241
Torch Data Type: bfloat16
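The 14 GB VRAM figure follows directly from the parameter count and data type: bfloat16 stores each weight in 2 bytes, so ~7B parameters need ~14 GB for the weights alone (activations and KV cache add more). A quick sanity check in plain Python, using the approximate 7B count listed above:

```python
# Rough weight-memory estimate: parameters x bytes per parameter.
params = 7_000_000_000   # ~7B parameters, as listed above
bytes_per_param = 2      # bfloat16 = 2 bytes per weight

weight_gb = params * bytes_per_param / 1e9
print(f"~{weight_gb:.0f} GB of weights")  # matches Required VRAM: 14 GB

# The sharded safetensors files listed above are consistent with this:
shards_gb = [5.0, 5.0, 4.0]
print(f"shard total: {sum(shards_gb):.1f} GB")  # vs. 13.9 GB listed
```

The same arithmetic explains why the GGUF quantized variants are attractive: at ~4 bits per weight, the weight footprint drops to roughly a quarter of the bfloat16 size.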

Best Alternatives to Bailong Orpo 7B

Best Alternatives | Context / RAM | Downloads / Likes
Lucie 7B Instruct | 31K / 13.4 GB | 411520
Lucie 7B Instruct Human Data | 31K / 13.4 GB | 5327
Lucie 7B Instruct Gguf | 31K / 4.1 GB | 15684
Sqlcoder 7B 2 | 16K / 13.5 GB | 46192426
Sql Code Gguf | 16K / 4.8 GB | 310
...pseek Coder 6.7B Instruct GGUF | 16K / 2.5 GB | 16689
... 7B Instruct Preview Reasoning | 4K / 14 GB | 493
Latxa 7B Instruct | 4K / 13.5 GB | 70
...lumiX 32K Instruct Q4 K M GGUF | 32K / 4.1 GB | 613
...p 0.05 Max Grad1.0 Grad Accu32 | 32K / 14.4 GB | 70
Note: green Score (e.g. "73.2") means that the model is better than INX-TEXT/Bailong-orpo-7B.

Rank the Bailong Orpo 7B Capabilities

🆘 Have you tried this model? Rate its performance. Your feedback helps the ML community identify the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  

What open-source LLMs or SLMs are you in search of? 53232 in total.

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20260328a