Incoder 6B by facebook


  Arxiv:2204.05999   Autotrain compatible   Code   Endpoints compatible   Javascript   Python   Pytorch   Region:us   Xglm
Model Card on HF 🤗: https://huggingface.co/facebook/incoder-6B

Incoder 6B Benchmarks

nn.n% — how the model scores relative to the reference models: Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Incoder 6B (facebook/incoder-6B)

Incoder 6B Parameters and Internals

Model Type:
decoder-only Transformer, code generation
Training Details:
Data Sources: GitHub, GitLab, StackOverflow
Methodology: causal-masked objective
Model Architecture: decoder-only Transformer
Input/Output:
Performance Tips: for fine-tuning, use the full-precision (float32) model; for inference, use the half-precision (float16) model, which is more memory-efficient and can run on a 16 GB GPU.
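The causal-masked objective lets the model infill code, not just complete it left to right. A minimal sketch of building a single-span infilling prompt, assuming the `<|mask:0|>` sentinel token described in the InCoder paper (multi-span infilling uses additional sentinels; real usage should follow the example code on the model card):

```python
def make_infill_prompt(prefix: str, suffix: str) -> str:
    """Build a single-span infilling prompt.

    The span to generate is replaced by a sentinel token, and the same
    sentinel is appended so the model continues by producing the missing
    code (generation is typically stopped at an end-of-mask token).
    """
    sentinel = "<|mask:0|>"  # sentinel name per the InCoder paper; verify against the model card
    return f"{prefix}{sentinel}{suffix}{sentinel}"

# Ask the model to fill in a function body:
prompt = make_infill_prompt("def add(a, b):\n", "\nprint(add(1, 2))\n")
```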
LLM Name: Incoder 6B
Repository 🤗: https://huggingface.co/facebook/incoder-6B
Model Size: 6b
Required VRAM: 26.6 GB
Updated: 2025-08-23
Maintainer: facebook
Model Type: xglm
Model Files: 26.6 GB
Model Architecture: XGLMForCausalLM
License: cc-by-nc-4.0
Context Length: 2049
Model Max Length: 2049
Transformers Version: 4.18.0.dev0
Tokenizer Class: PreTrainedTokenizerFast
Vocabulary Size: 50518
Torch Data Type: float32
Activation Function: gelu
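The VRAM figures above follow directly from the parameter count and data type. A back-of-the-envelope check (the ~6.65B weight count here is inferred from the 26.6 GB float32 checkpoint size, not an official figure):

```python
# Infer the weight count from the float32 checkpoint size (4 bytes per weight).
CHECKPOINT_GB = 26.6
n_params = CHECKPOINT_GB * 1e9 / 4   # ~6.65e9 weights

fp32_gb = n_params * 4 / 1e9         # full precision, used for fine-tuning
fp16_gb = n_params * 2 / 1e9         # half precision, used for inference

# Half precision leaves headroom on a 16 GB GPU; full precision does not.
print(fp32_gb, fp16_gb)
```

This is why the performance tips recommend the float16 model for inference: halving the bytes per weight brings the footprint to roughly 13.3 GB, under the 16 GB limit of common GPUs.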

Rank the Incoder 6B Capabilities

🆘 Have you tried this model? Rate its performance. This feedback helps the ML community identify the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124