WangchanLion7B by airesearch


Tags: Merged Model · Autotrain compatible · Custom code · Endpoints compatible · Instruct · MPT · PyTorch · Sharded · Region: US · Languages: th, en (finetuning datasets listed under Training Details below)

WangchanLion7B Benchmarks

nn.n% — benchmark scores show how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
WangchanLion7B (airesearch/WangchanLion7B)

WangchanLion7B Parameters and Internals

Model Type 
Multilingual, Instruction-following, MPT architecture
Use Cases 
Areas:
Research, Commercial Applications
Applications:
Reading Comprehension, Brainstorming, Creative Writing
Primary Use Cases:
Instruction-following tasks
Limitations:
Math problems, Reasoning, Factuality
Considerations:
Users should be aware of biases and limitations.
Additional Notes 
The model focuses on transparency regarding data, code, and processes. Finetuning datasets are open and commercially permissible.
Supported Languages 
Thai (primary), English (secondary)
Training Details 
Data Sources:
laion/OIG, databricks/databricks-dolly-15k, thaisum, scb_mt_enth_2020, garage-bAInd/Open-Platypus, iapp_wiki_qa_squad, pythainlp/han-instruct-dataset-v1.0, cognitivecomputations/dolphin, Hello-SimpleAI/HC3, Muennighoff/xP3x, openai/summarize_from_feedback
Methodology:
Instruction fine-tuning via QLoRA on 4 A100 GPUs
Model Architecture:
MPT
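
The QLoRA recipe above can be reproduced in outline with the peft and transformers libraries. Below is a minimal sketch, assuming the standard QLoRA setup (4-bit NF4 quantization of the frozen base model plus LoRA adapters); the LoRA hyperparameters, the "Wqkv"/"out_proj" target modules (the usual MPT attention projection names), and the single-dataset choice are illustrative assumptions, not airesearch's exact configuration:

# QLoRA fine-tuning sketch (illustrative hyperparameters, not the exact recipe).
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "airesearch/WangchanLion7B"  # shown for illustration; training started from a base MPT checkpoint

# 4-bit NF4 quantization of the frozen base weights: the "Q" in QLoRA.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    trust_remote_code=True,  # the repo ships custom MPT modeling code
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# Trainable LoRA adapters on the attention projections (assumed module names for MPT).
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["Wqkv", "out_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapters train; the 4-bit base stays frozen

# One of the listed finetuning sets; the actual mix spans all sources above.
dataset = load_dataset("databricks/databricks-dolly-15k", split="train")

From here a standard transformers Trainer (or trl's SFTTrainer) over the formatted instruction pairs completes the loop; keeping only adapter weights trainable is what lets a 7B model fit the reported 4×A100 budget.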
LLM Name: WangchanLion7B
Repository: https://huggingface.co/airesearch/WangchanLion7B
Merged Model: Yes
Required VRAM: 29.8 GB
Updated: 2025-10-02
Maintainer: airesearch
Model Type: mpt
Instruction-Based: Yes
Model Files: 33 shards — 4.2 GB (shard 1-of-33), 0.8 GB each (shards 2-of-33 through 33-of-33); 29.8 GB total
Supported Languages: th, en
Model Architecture: MPTForCausalLM
License: apache-2.0
Transformers Version: 4.34.1
Vocabulary Size: 256000
Torch Data Type: float32
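
The shard sizes above account for the VRAM figure: 4.2 GB + 32 × 0.8 GB ≈ 29.8 GB of float32 weights. Casting to float16 at load time roughly halves that footprint. A minimal inference sketch with transformers follows; trust_remote_code=True is required because the repo ships custom MPT code, and the prompt below is a hypothetical example, not the model's documented template:

# Inference sketch for WangchanLion7B.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "airesearch/WangchanLion7B"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # checkpoint is float32 (~29.8 GB); fp16 halves the memory footprint
    trust_remote_code=True,     # repo uses custom MPT modeling code
    device_map="auto",
)

# Hypothetical Thai instruction prompt; check the model card for the exact format.
prompt = "อธิบายการทำงานของโมเดลภาษาขนาดใหญ่แบบสั้น ๆ"  # "Briefly explain how large language models work"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

If even ~15 GB is too much, 4-bit loading via a BitsAndBytesConfig (as in the fine-tuning sketch above) would cut the requirement further at some cost in output quality.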

Best Alternatives to WangchanLion7B

Best Alternatives                Context / RAM    Downloads    Likes
Replit Code Instruct Glaive      0K / 10.4 GB     88           8



Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124