CodeFuse 13B GPTQ by TheBloke


Tags: 4-bit · Autotrain compatible · Base model: codefuse-ai/codefus... · Base model (quantized): codefuse-... · GPT-NeoX · GPTQ · Quantized · Region: us · Safetensors

CodeFuse 13B GPTQ Benchmarks

nn.n% — How the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").

CodeFuse 13B GPTQ Parameters and Internals

Model Type 
gptneox
Use Cases 
Areas:
code generation
Supported Languages 
en (English), zh (Chinese)
Training Details 
Data Sources:
1,000B tokens of code, Chinese, and English data covering more than 40 programming languages.
Methodology:
The model was fine-tuned on the CodeFuse-Evol-instruction-66k dataset.
Context Length:
4096
Model Architecture:
GPT-NeoX
Input Output 
Input Format:
<|role_start|>system<|role_end|>{system_message} <|role_start|>human<|role_end|>{prompt} <|role_start|>bot<|role_end|>
Accepted Modalities:
text
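
As a minimal illustration of the prompt format above, the Python sketch below assembles a request string with the role tags. The helper name and the example system/user messages are illustrative placeholders, not values from the model card.

# Minimal sketch: build a prompt in the CodeFuse role-tag format shown above.
# The messages below are illustrative placeholders.

def build_codefuse_prompt(system_message: str, prompt: str) -> str:
    """Wrap a system message and a user prompt in CodeFuse role tags."""
    return (
        f"<|role_start|>system<|role_end|>{system_message}"
        f"<|role_start|>human<|role_end|>{prompt}"
        f"<|role_start|>bot<|role_end|>"
    )

text = build_codefuse_prompt(
    "You are a helpful coding assistant.",
    "Write a Python function that reverses a string.",
)
print(text)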
LLM Name: CodeFuse 13B GPTQ
Repository: https://huggingface.co/TheBloke/CodeFuse-13B-GPTQ
Model Name: Codefuse 13B
Model Creator: CodeFuse AI
Base Model(s): CodeFuse 13B (codefuse-ai/CodeFuse-13B)
Model Size: 13B
Required VRAM: 8.6 GB
Updated: 2025-06-09
Maintainer: TheBloke
Model Type: gpt_neox
Model Files: 8.6 GB
GPTQ Quantization: Yes
Quantization Type: gptq
Model Architecture: GPTNeoXForCausalLM
License: other
Context Length: 4096
Model Max Length: 4096
Transformers Version: 4.34.0
Tokenizer Class: PreTrainedTokenizerFast
Vocabulary Size: 100831
Torch Data Type: float16
CodeFuse 13B GPTQ (TheBloke/CodeFuse-13B-GPTQ)
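
For context on the figures above (4-bit GPTQ weights, roughly 8.6 GB of files, Transformers 4.34.0), the sketch below shows one common way to load and query the checkpoint with the Hugging Face Transformers library. It assumes a CUDA GPU and an installed GPTQ backend (auto-gptq or optimum); the device map, revision, and generation settings are illustrative defaults, not maintainer recommendations.

# Hedged sketch: load the 4-bit GPTQ checkpoint and run one generation.
# Assumes a CUDA GPU and a GPTQ backend (auto-gptq / optimum) is installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/CodeFuse-13B-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",  # ~8.6 GB of VRAM per the table above
    revision="main",    # other branches may hold alternative quantization configs
)

# Prompt uses the role-tag format documented above.
prompt = (
    "<|role_start|>system<|role_end|>You are a helpful coding assistant."
    "<|role_start|>human<|role_end|>Write a Python function that reverses a string."
    "<|role_start|>bot<|role_end|>"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))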

Best Alternatives to CodeFuse 13B GPTQ

Best Alternatives | Context / RAM | Downloads / Likes
CodeFuse 13B | 4K / 54.6 GB | 3049
Sarashina1 13B | 2K / 26.3 GB | 11840
Polyglot Ko Kullm V2 Fix | 2K / 51.7 GB | 15720
Pythia 13B Deduped Green Devil | 2K / 23.9 GB | 223810
KORani V1 13B | 2K / 51.8 GB | 197
Note: a green score (e.g. "73.2") means that the model is better than TheBloke/CodeFuse-13B-GPTQ.


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124