SWE Llama 13B by princeton-nlp


Tags: Arxiv:2308.12950, Arxiv:2310.06770, Autotrain compatible, Endpoints compatible, Llama, Pytorch, Region:us, Sharded

SWE Llama 13B Benchmarks

Benchmark scores (shown as nn.n%) indicate how the model compares to the reference models: Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
SWE Llama 13B (princeton-nlp/SWE-Llama-13b)

SWE Llama 13B Parameters and Internals

Model Type: transformer
Additional Notes: The model generates patches that resolve GitHub issues, conditioned on the issue description and the relevant code context.
Supported Languages: Python (proficient)
Training Details:
Data Sources: 37 popular Python code repositories
Data Volume: 19,000 issues and pull requests
Methodology: Fine-tuned only the attention matrices using the LoRA method (see the sketch below)
Model Architecture: Transformer, based on CodeLlama
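A minimal sketch of what such a LoRA setup could look like with the Hugging Face peft library is shown below. The rank, alpha, dropout, and base checkpoint are illustrative assumptions, not values published for this model.

```python
# Hypothetical LoRA setup that fine-tunes only the attention matrices of a
# CodeLlama base model. Rank, alpha, dropout, and the base checkpoint are
# assumptions for illustration, not the model's published hyperparameters.
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained(
    "codellama/CodeLlama-13b-hf",   # base model family named in the model card
    torch_dtype=torch.bfloat16,
)

lora_config = LoraConfig(
    r=16,                            # assumed rank
    lora_alpha=32,                   # assumed scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention matrices only
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()   # only the LoRA adapters are trainable
```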
LLM Name: SWE Llama 13B
Repository: https://huggingface.co/princeton-nlp/SWE-Llama-13b
Model Size: 13b
Required VRAM: 26 GB
Updated: 2025-08-18
Maintainer: princeton-nlp
Model Type: llama
Model Files: 9.9 GB (1-of-3), 9.9 GB (2-of-3), 6.2 GB (3-of-3)
Model Architecture: LlamaForCausalLM
Context Length: 16384
Model Max Length: 16384
Transformers Version: 4.34.0.dev0
Tokenizer Class: LlamaTokenizer
Beginning of Sentence Token: <s>
End of Sentence Token: </s>
Unk Token: <unk>
Vocabulary Size: 32000
Torch Data Type: bfloat16
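
Based on the specs above (LlamaForCausalLM, LlamaTokenizer, bfloat16, 16K context), a minimal loading and inference sketch might look like the following. The prompt layout is an assumption for illustration and is not the exact SWE-bench prompt template.

```python
# Minimal loading/inference sketch based on the listed specs; the prompt
# format is illustrative, not the official SWE-bench template.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "princeton-nlp/SWE-Llama-13b"

tokenizer = AutoTokenizer.from_pretrained(model_id)   # LlamaTokenizer
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,                        # ~26 GB of VRAM as listed
    device_map="auto",
)

prompt = (
    "Issue description:\n<github issue text>\n\n"
    "Relevant code context:\n<retrieved source files>\n\n"
    "Generate a patch that resolves the issue:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=512, do_sample=False)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```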

Best Alternatives to SWE Llama 13B

Model | Context / RAM | Downloads | Likes
Luminaura RP 13B | 128K / 26 GB | 6 | 0
Yarn Llama 2 13B 128K | 128K / 26 GB | 46 | 112
Agent Llama2 13B 80K | 80K / 26.4 GB | 7 | 0
Chat Llama2 13B 80K | 80K / 52.8 GB | 8 | 0
LongAlign 13B 64K | 64K / 26 GB | 16 | 13
LongAlign 13B 64K | 64K / 26 GB | 11 | 13
LongAlign 13B 64K Base | 64K / 26 GB | 15 | 3
LongAlign 13B 64K Base | 64K / 26 GB | 6 | 3
Openbuddy Llama2 13B V15p1 64K | 64K / 26.1 GB | 4 | 4
Openbuddy Llama2 13b64k V15 | 64K / 26.1 GB | 11 | 2
Note: a green score (e.g. "73.2") indicates that the model outperforms princeton-nlp/SWE-Llama-13b.


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124