CodeLlama 34B Instruct GGUF is an open-source language model, quantized to GGUF format by TheBloke from Meta's Code Llama 34B Instruct. Features: 34B parameters, VRAM: 14.2 GB, License: llama2, Quantized, Instruction-Based, Code Generating, LLM Explorer Score: 0.12.
CodeLlama 34B Instruct GGUF Parameters and Internals
Model Type
llama
Use Cases
Areas:
commercial and research use in English and relevant programming languages
Applications:
general code synthesis and understanding
Limitations:
Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English.
Considerations:
Before deploying any applications of Code Llama, developers should perform safety testing and tuning tailored to their specific applications of the model.
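For code-synthesis use, the instruct variant expects its input wrapped in the Llama-2-style `[INST]` template. Below is a minimal sketch of building such a prompt; the helper name `build_instruct_prompt` and the example GGUF filename in the comments are illustrative assumptions, not part of the model's official API.

```python
from typing import Optional

def build_instruct_prompt(user_message: str, system_prompt: Optional[str] = None) -> str:
    """Wrap a user request in the [INST] template used by Llama-2-style
    instruct models (helper name and exact spacing are assumptions)."""
    if system_prompt:
        return f"[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n{user_message} [/INST]"
    return f"[INST] {user_message} [/INST]"

prompt = build_instruct_prompt("Write a Python function that reverses a string.")
print(prompt)

# The prompt could then be fed to a llama.cpp-based runtime, e.g. (hypothetical
# filename, requires the llama-cpp-python package and the downloaded GGUF file):
# from llama_cpp import Llama
# llm = Llama(model_path="codellama-34b-instruct.Q4_K_M.gguf")
# print(llm(prompt, max_tokens=256)["choices"][0]["text"])
```

Keeping prompt construction in a small helper like this makes the safety testing mentioned above easier, since the exact strings sent to the model are reproducible.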
Additional Notes
Training all 9 Code Llama models required 400K GPU hours of computation. Estimated total emissions were 65.3 tCO2eq, 100% of which were offset by Meta's sustainability program.
Training Details
Methodology:
Code Llama is an auto-regressive language model that uses an optimized transformer architecture.
Hardware Used:
Metaβs Research Super Cluster.
Model Architecture:
auto-regressive language model with optimized transformer architecture.
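"Auto-regressive" means the model generates one token at a time, feeding each prediction back into the context before predicting the next. The sketch below illustrates that loop with a toy lookup table standing in for the transformer; the function name and table are illustrative assumptions, not Code Llama internals.

```python
# Toy sketch of auto-regressive generation: predict the next token from the
# current context, append it, repeat. A real transformer would condition on
# the full context; this stand-in conditions only on the last token.
def generate(next_token_table: dict, context: list, steps: int) -> list:
    tokens = list(context)
    for _ in range(steps):
        nxt = next_token_table.get(tokens[-1])  # "model" call: predict next token
        if nxt is None:  # no known continuation: stop early
            break
        tokens.append(nxt)  # feed the prediction back into the context
    return tokens

table = {"def": "add", "add": "(", "(": "a", "a": ","}
print(generate(table, ["def"], 4))  # → ['def', 'add', '(', 'a', ',']
```

The same loop structure underlies real decoding; the per-step cost of re-reading the growing context is what optimized transformer implementations (e.g. with KV caching) reduce.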