7B by CausalLM

Model Card on HF 🤗: https://huggingface.co/CausalLM/7B

7B Parameters and Internals

Model Type 
text generation, causallm
Use Cases 
Areas:
research, commercial applications
Limitations:
May produce hallucinations or unreliable outputs. Trained on unfiltered internet data, so outputs may contain objectionable content.
Considerations:
Users should implement safety checks and filter keywords in outputs.
Additional Notes 
The 7B version is a distilled version of the 14B model.
Supported Languages 
en (English: high proficiency), zh (Chinese: high proficiency)
Training Details 
Data Sources:
JosephusCheung/GuanacoDataset, Open-Orca/OpenOrca, stingning/ultrachat, meta-math/MetaMathQA, liuhaotian/LLaVA-Instruct-150K, jondurbin/airoboros-3.1, WizardLM/WizardLM_evol_instruct_V2_196k, RyokoAI/ShareGPT52K, RyokoAI/Fandom23K, milashkaarshif/MoeGirlPedia_wikitext_raw_archive, wikipedia, wiki_lingua, fnlp/moss-003-sft-data, garage-bAInd/Open-Platypus, LDJnr/Puffin, openbmb/llava_zh, BAAI/COIG, TigerResearch/tigerbot-zhihu-zh-10k, liwu/MNBVC, teknium/openhermes
Data Volume:
1.3B tokens
Methodology:
Manually curated SFT dataset; synthetic data generated using larger language models.
Model Architecture:
Same as the original MHA LLaMA2 architecture; no additional scaling applied to RoPE.
Input Output 
Input Format:
ChatML (see the prompt sketch below)
Accepted Modalities:
text
Output Format:
text
Performance Tips:
Avoid unofficial GPTQ and AWQ models; prefer GGUF for quantization.
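Since the model expects ChatML-formatted input, a minimal sketch of hand-building such a prompt is shown below. The system and user messages are placeholders; recent transformers releases can also produce this layout via tokenizer.apply_chat_template.

```python
# Minimal sketch: hand-building a ChatML prompt (message text is illustrative).
def build_chatml_prompt(system: str, user: str) -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt(
    "You are a helpful assistant.",
    "Summarize the LLaMA2 attention mechanism in one sentence.",
)
```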
LLM Name: 7B
Repository 🤗: https://huggingface.co/CausalLM/7B
Model Size: 7b
Required VRAM: 15.5 GB
Updated: 2025-06-09
Maintainer: CausalLM
Model Type: llama
Instruction-Based: Yes
Model Files: 10.0 GB (1-of-2), 5.5 GB (2-of-2)
Supported Languages: en, zh
Model Architecture: LlamaForCausalLM
License: wtfpl
Context Length: 8192
Model Max Length: 8192
Transformers Version: 4.35.0.dev0
Tokenizer Class: GPT2Tokenizer
Vocabulary Size: 151936
Torch Data Type: bfloat16
7B (CausalLM/7B)
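Given the metadata above (LlamaForCausalLM, bfloat16 weights, an 8192-token context), a minimal loading sketch with Hugging Face transformers might look like the following; the device_map and generation settings are assumptions, not part of the model card.

```python
# Minimal sketch: loading CausalLM/7B with transformers (device settings are assumptions).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CausalLM/7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)  # GPT2Tokenizer, vocab size 151936
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the card's Torch Data Type
    device_map="auto",           # assumption: accelerate-style placement on available GPUs
)

prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nHello!<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```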

Quantized Models of the 7B

Model | Likes | Downloads | VRAM
CausalLM 7B GGUF | 61 | 1369 | 3 GB
CausalLM 7B GPTQ | 53 | 6 | 5 GB
CausalLM 7B AWQ | 32 | 4 | 5 GB
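In line with the performance tip above (prefer GGUF over unofficial GPTQ/AWQ builds), a minimal sketch for running a GGUF quant with llama-cpp-python follows; the file name is hypothetical and depends on which quant you download.

```python
# Minimal sketch: running a GGUF quant with llama-cpp-python (file name is hypothetical).
from llama_cpp import Llama

llm = Llama(
    model_path="causallm-7b.Q4_K_M.gguf",  # assumption: a Q4_K_M file from the GGUF repo
    n_ctx=8192,                            # matches the card's context length
    chat_format="chatml",                  # the model expects ChatML input
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Say hello in Chinese."},
    ],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```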

Best Alternatives to 7B

Best Alternatives | Context / RAM | Downloads | Likes
A3.4 | 1024K / 16.1 GB | 13 | 0
124 | 1024K / 16.1 GB | 93 | 0
A5.4 | 1024K / 16.1 GB | 12 | 0
A2.4 | 1024K / 16.1 GB | 12 | 0
... Qwen2.5llamaify 7B V23.1 200K | 195K / 15.2 GB | 1598 | 4
SuperNeuralDreadDevil 8B | 128K / 16.1 GB | 70 | 1
Falcon3 7B Instruct | 32K / 14.8 GB | 43790 | 71
Falcon3 Jessi V0.4 7B Slerp | 32K / 14.9 GB | 10 | 9
Jessi V0.4 Falcon3 7B Instruct | 32K / 14.8 GB | 6 | 0
Jessi V0.6 Falcon3 7B Instruct | 32K / 14.8 GB | 9 | 0

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124