Vicuna 7B V1.5 16K GGUF by TheBloke


Arxiv: 2306.05685, 2307.09288 · Base model: lmsys/vicuna-7b-v1.5-16k (quantized) · Tags: GGUF, Llama, Quantized, Region: us

Vicuna 7B V1.5 16K GGUF Benchmarks

Benchmark scores on this page are expressed relative to reference models: Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4"). No scores are currently listed for TheBloke/vicuna-7B-v1.5-16K-GGUF.

Vicuna 7B V1.5 16K GGUF Parameters and Internals

Model Type: auto-regressive language model
Use Cases: research on large language models, chatbots
Additional Notes: Vicuna is fine-tuned from Llama 2 on user-shared conversations collected from ShareGPT.
Training Details:
Data Volume: 125K conversations
Methodology: supervised instruction fine-tuning with linear RoPE scaling
Context Length: 16K (16,384 tokens)
Model Architecture: transformer
Input Format: USER: {prompt} ASSISTANT:
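
The template above can be assembled programmatically. A minimal sketch in Python follows; the default system preamble used here is the stock Vicuna v1.5 system message and is an assumption, not something stated on this page.

def build_vicuna_prompt(user_message, system_message=None):
    # Vicuna v1.5 format: optional system preamble, then "USER: ... ASSISTANT:"
    # The default preamble is assumed from the upstream Vicuna v1.5 model card.
    if system_message is None:
        system_message = ("A chat between a curious user and an artificial intelligence assistant. "
                          "The assistant gives helpful, detailed, and polite answers to the user's questions.")
    return f"{system_message} USER: {user_message} ASSISTANT:"

print(build_vicuna_prompt("Summarize linear RoPE scaling in one sentence."))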
LLM Name: Vicuna 7B V1.5 16K GGUF
Repository: https://huggingface.co/TheBloke/vicuna-7B-v1.5-16K-GGUF
Model Name: Vicuna 7B v1.5 16K
Model Creator: lmsys
Base Model(s): lmsys/vicuna-7b-v1.5-16k
Model Size: 7B
Required VRAM: 2.8 GB
Updated: 2025-08-18
Maintainer: TheBloke
Model Type: llama
Model Files: 2.8 GB, 3.6 GB, 3.3 GB, 3.0 GB, 3.8 GB, 4.1 GB, 3.9 GB, 4.7 GB, 4.8 GB, 4.7 GB, 5.5 GB, 7.2 GB (see the loading sketch below)
GGUF Quantization: Yes
Quantization Type: gguf
Model Architecture: AutoModel
License: llama2
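
As a usage illustration, here is a minimal sketch of loading one of the quantized files with llama-cpp-python. The file name vicuna-7b-v1.5-16k.Q4_K_M.gguf, its mapping to the 4.1 GB entry above, and the explicit RoPE override are assumptions; recent llama.cpp builds normally pick up the linear RoPE scaling (Llama 2's 4,096-token base stretched 4x to 16,384) from the GGUF metadata on their own.

from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama(
    model_path="vicuna-7b-v1.5-16k.Q4_K_M.gguf",  # assumed name of the ~4.1 GB quant file
    n_ctx=16384,           # the 16K context this variant was fine-tuned for
    rope_freq_scale=0.25,  # linear RoPE factor 4 (4096 -> 16384); usually already set in GGUF metadata
)

prompt = "USER: What data was Vicuna fine-tuned on? ASSISTANT:"
out = llm(prompt, max_tokens=256, stop=["USER:"])
print(out["choices"][0]["text"])

If the file is not already local, hf_hub_download from the huggingface_hub package can fetch it from the repository first.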

Best Alternatives to Vicuna 7B V1.5 16K GGUF

Best Alternatives | Context / RAM | Downloads / Likes
Pixel | 8K / 4.4 GB | 170
Mistral 7B Instruct V0.3 GGUF | 0K / 1.6 GB | 172963108
Conversely Mistral 7B | 0K / 0.2 GB | 220
Qwen2 7B Instruct GGUF | 0K / 1.9 GB | 14730211
WizardLM 2 7B GGUF | 0K / 2.7 GB | 14837482
CleverBoi 7B V2 | 0K / 0.1 GB | 550
...hemeng Qwen Math 7b 24 1 100 1 | 0K / 15.2 GB | 130
Mistral 7B Instruct V0.2 GGUF | 0K / 3.1 GB | 82194453
Mistral 7B Instruct V0.3 GGUF | 0K / 2.7 GB | 2626210
Qwen2 7B Instruct V0.6 GGUF | 0K / 4.5 GB | 135220



Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124