CodeLlama 7B Instruct Hf En CMP TR Size 304 Epochs 10 2024 06 23 10 41 40 3558636 by vdavidr


Tags: Adapter, Base model: adapter:codellama/c..., Base model: codellama/codellama..., Codegen, Finetuned, Generated from trainer, Instruct, LoRA, PEFT, Region: us, Safetensors, TensorBoard

CodeLlama 7B Instruct Hf En CMP TR Size 304 Epochs 10 2024 06 23 10 41 40 3558636 Benchmarks

Benchmark scores (shown as nn.n%) indicate how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").

CodeLlama 7B Instruct Hf En CMP TR Size 304 Epochs 10 2024 06 23 10 41 40 3558636 Parameters and Internals

Additional Notes 
This model is a LoRA fine-tuned version of the base model, trained with the following hyperparameters: learning_rate 0.001, train_batch_size 1, eval_batch_size 1, seed 3407, distributed_type multi-GPU, optimizer Adam, lr_scheduler_type linear, and lr_scheduler_warmup_steps 304. Framework versions: PEFT 0.7.1, Transformers 4.37.0, PyTorch 2.2.1+cu121, Datasets 2.20.0, Tokenizers 0.15.2.
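The linear scheduler with 304 warmup steps ramps the learning rate from 0 up to the peak of 0.001, then decays it linearly back to 0 over the remaining steps. A minimal sketch of that schedule (the total step count below is a hypothetical value for illustration; the card does not report it):

```python
def linear_lr(step, base_lr=0.001, warmup_steps=304, total_steps=3040):
    """Hugging Face-style linear schedule: linear warmup, then linear decay.

    total_steps is an assumed value for illustration; the model card does
    not state the actual number of training steps.
    """
    if step < warmup_steps:
        # Warmup: ramp linearly from 0 to base_lr.
        return base_lr * step / warmup_steps
    # Decay: fall linearly from base_lr at end of warmup to 0 at total_steps.
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

print(linear_lr(0))     # 0.0 at the start of warmup
print(linear_lr(304))   # 0.001 at the peak, end of warmup
print(linear_lr(3040))  # 0.0 at the end of training
```

This mirrors the behavior of the `linear` scheduler type configured above.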
LLM Name: CodeLlama 7B Instruct Hf En CMP TR Size 304 Epochs 10 2024 06 23 10 41 40 3558636
Repository 🤗: https://huggingface.co/vdavidr/CodeLlama-7b-Instruct-hf_En__CMP_TR_size_304_epochs_10_2024-06-23_10-41-40_3558636
Base Model(s): CodeLlama 7B Instruct Hf (codellama/CodeLlama-7b-Instruct-hf)
Model Size: 7B
Required VRAM: 0.6 GB
Updated: 2025-07-31
Maintainer: vdavidr
Instruction-Based: Yes
Model Files: 0.6 GB, 0.0 GB
Generates Code: Yes
Model Architecture: Adapter
License: llama2
Is Biased: none
Tokenizer Class: CodeLlamaTokenizer
Padding Token: </s>
PEFT Type: LORA
LoRA Model: Yes
PEFT Target Modules: v_proj, k_proj, o_proj, q_proj, gate_proj, down_proj, up_proj
LoRA Alpha: 16
LoRA Dropout: 0.1
R Param (rank): 16
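With rank r = 16 and the seven target modules listed above, the adapter's trainable parameter count can be estimated from the standard 7B LLaMA dimensions (hidden size 4096, MLP intermediate size 11008, 32 layers; these dimensions are assumptions, since the card does not list them). Each adapted weight of shape (d_out, d_in) gains two low-rank factors contributing r * (d_in + d_out) parameters:

```python
def lora_param_count(r=16, hidden=4096, intermediate=11008, layers=32):
    """Estimate trainable LoRA parameters for the listed target modules.

    Dimensions assume the standard 7B LLaMA architecture; each adapted
    module of shape (d_out, d_in) contributes r * (d_in + d_out) params.
    """
    attn = 4 * r * (hidden + hidden)          # q_proj, k_proj, v_proj, o_proj
    mlp_up = 2 * r * (hidden + intermediate)  # gate_proj, up_proj
    mlp_down = r * (intermediate + hidden)    # down_proj
    return layers * (attn + mlp_up + mlp_down)

print(lora_param_count())  # 39976960, i.e. roughly 40M trainable parameters
```

Under these assumptions the adapter trains on the order of 40 million parameters, a small fraction of the 7B base model, which is why the adapter files themselves are well under 1 GB.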


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124