CodeLlama 34B Python GGUF by TheBloke


CodeLlama 34B Python GGUF is an open-source language model published by TheBloke. Key facts: 34B parameters, 14.2 GB minimum VRAM (smallest quantization), llama2 license, quantized (GGUF), code generation, LLM Explorer Score 0.1.

Tags: Arxiv:2308.12950, Code, Codegen, GGUF, Llama, Llama2, Quantized, Region: US

CodeLlama 34B Python GGUF Benchmarks

Scores are shown as percentages relative to reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
CodeLlama 34B Python GGUF (TheBloke/CodeLlama-34B-Python-GGUF)

CodeLlama 34B Python GGUF Parameters and Internals

Model Type: llama
Additional Notes: This model is designed for general code synthesis and understanding, specifically for Python.
LLM Name: CodeLlama 34B Python GGUF
Repository 🤗: https://huggingface.co/TheBloke/CodeLlama-34B-Python-GGUF
Model Name: CodeLlama 34B Python
Model Creator: Meta
Base Model(s): codellama/CodeLlama-34b-python-hf
Model Size: 34b
Required VRAM: 14.2 GB
Updated: 2026-04-01
Maintainer: TheBloke
Model Files: 14.2 GB, 17.8 GB, 16.3 GB, 14.6 GB, 19.1 GB, 20.2 GB, 19.1 GB, 23.2 GB, 23.8 GB, 23.2 GB, 27.7 GB, 35.9 GB
Supported Languages: code
GGUF Quantization: Yes
Quantization Type: gguf
Generates Code: Yes
Model Architecture: AutoModel
License: llama2
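The twelve file sizes listed under "Model Files" correspond to the repo's quantization levels. The quant-name-to-size mapping below is an assumption based on TheBloke's standard GGUF lineup for 34B models (Q2_K through Q8_0); only the sizes themselves come from the listing above. A minimal sketch for picking the largest quantization that fits a given memory budget:

```python
# File sizes (GB) from the listing above. The quant names are an assumed
# mapping based on TheBloke's usual GGUF lineup, not taken from this page.
QUANT_SIZES_GB = {
    "Q2_K": 14.2, "Q3_K_S": 14.6, "Q3_K_M": 16.3, "Q3_K_L": 17.8,
    "Q4_0": 19.1, "Q4_K_S": 19.1, "Q4_K_M": 20.2, "Q5_0": 23.2,
    "Q5_K_S": 23.2, "Q5_K_M": 23.8, "Q6_K": 27.7, "Q8_0": 35.9,
}

def pick_quant(budget_gb, overhead_gb=1.0):
    """Return the largest quant whose file fits within budget_gb,
    reserving overhead_gb for KV cache and activations on top of
    the weights. Returns None if nothing fits."""
    fitting = {q: s for q, s in QUANT_SIZES_GB.items()
               if s + overhead_gb <= budget_gb}
    if not fitting:
        return None
    return max(fitting, key=fitting.get)

print(pick_quant(24))  # a 24 GB GPU -> "Q4_K_M" (20.2 GB + overhead)
print(pick_quant(16))  # 16 GB -> "Q3_K_S"
print(pick_quant(10))  # too small -> None
```

The chosen file could then be downloaded from the repository above and loaded with llama.cpp or a compatible runtime; with partial GPU offload, budgets between quant sizes are also workable.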

Best Alternatives to CodeLlama 34B Python GGUF

Best Alternatives                  Context / RAM   Downloads   Likes
CodeLlama 34B Instruct GGUF        0K / 14.2 GB    4648        107
Phind CodeLlama 34B V2 GGUF        0K / 14.2 GB    2944        170
CodeLlama 34B Instruct Hf GGUF     0K / 12.5 GB    180         1
CodeLlama 34B Hf GGUF              0K / 12.5 GB    129         3
CodeLlama 34B Python Hf GGUF       0K / 12.5 GB    59          1
...allistic CodeLlama 34B V1 GGUF  0K / 35.9 GB    38          0
CodeLlama 34B GGUF                 0K / 14.2 GB    1497        55
...d CodeLlama 34B Python V1 GGUF  0K / 14.2 GB    1278        13
...chless Codellama 34B V2.0 GGUF  0K / 14.2 GB    285         10
CodeFuse CodeLlama 34B GGUF        0K / 14.2 GB    432         20
Note: green Score (e.g. "73.2") means that the model is better than TheBloke/CodeLlama-34B-Python-GGUF.



Original data from HuggingFace, OpenCompass and various public git repos.
Release v20260328a