Stable Vicuna 13B GGUF by TheBloke


Stable Vicuna 13B GGUF is an open-source language model by TheBloke. Features: 13b LLM, VRAM: 5.4GB, License: cc-by-nc-sa-4.0, Quantized, LLM Explorer Score: 0.1.

Tags: Arxiv:2302.13971 · Base model: carperai/stable-vic... · Base model (quantized): carperai/... · Dataset: nomic-ai/gpt4all promp... · Dataset: openassistant/oasst1 · Dataset: tatsu-lab/alpaca · En · Gguf · Llama · Quantized · Region:us

Stable Vicuna 13B GGUF Benchmarks


Stable Vicuna 13B GGUF Parameters and Internals

Model Type 
causal-lm, llama
Use Cases 
Areas:
text generation, conversational tasks
Limitations:
The base LLaMA model is trained on various data, some of which may contain offensive, harmful, and biased content that can lead to toxic behavior.
Considerations:
Do not treat chat responses from this model as a substitute for human judgment or as a source of truth. Please use responsibly.
Additional Notes 
The model was fine-tuned using reinforcement learning from human feedback (RLHF) via Proximal Policy Optimization (PPO) on various conversational and instructional datasets.
Supported Languages 
English (fluent)
Training Details 
Data Sources:
OpenAssistant/oasst1, nomic-ai/gpt4all_prompt_generations, tatsu-lab/alpaca
Methodology:
fine-tuned using reinforcement learning from human feedback (RLHF) via Proximal Policy Optimization (PPO)
Context Length:
512
Model Architecture:
LLaMA transformer architecture
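The 512-token context window is small by current standards, so chat history has to be trimmed before each call. A minimal sketch of one way to do that, assuming a rough one-token-per-word estimate (the real LLaMA tokenizer bundled with the GGUF file should be used instead; the function name and `reserve` value are ours):

```python
def trim_history(turns, max_tokens=512, reserve=128):
    """Keep the most recent turns whose rough token count fits the window.

    `turns` is a list of message strings (alternating user/assistant).
    `reserve` leaves room for the model's reply. Token counts are
    approximated by whitespace word count -- an assumption, not the
    actual LLaMA tokenizer.
    """
    budget = max_tokens - reserve
    kept, used = [], 0
    for turn in reversed(turns):        # walk newest-first
        cost = len(turn.split())
        if used + cost > budget:
            break                       # older turns no longer fit
        kept.append(turn)
        used += cost
    return list(reversed(kept))         # restore chronological order
```

In practice the same budgeting logic applies whichever tokenizer you swap in; only the `cost` computation changes.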
Input Output 
Input Format:
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT:
Accepted Modalities:
text
Output Format:
text
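The prompt template above can be filled in programmatically before being passed to a GGUF runtime such as llama.cpp. A small helper, assuming the exact system preamble shown (the helper name itself is ours):

```python
# System preamble taken verbatim from the model's documented prompt format.
SYSTEM = ("A chat between a curious user and an artificial intelligence "
          "assistant. The assistant gives helpful, detailed, and polite "
          "answers to the user's questions.")

def build_prompt(user_message: str) -> str:
    """Render the single-turn Vicuna-style prompt this model expects."""
    return f"{SYSTEM} USER: {user_message} ASSISTANT:"
```

The rendered string is passed as the raw prompt; the model's completion follows the trailing "ASSISTANT:" marker.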
LLM Name: Stable Vicuna 13B GGUF
Repository 🤗: https://huggingface.co/TheBloke/stable-vicuna-13B-GGUF
Model Name: Stable Vicuna 13B
Model Creator: CarperAI
Base Model(s): Stable Vicuna 13B Delta (CarperAI/stable-vicuna-13b-delta)
Model Size: 13b
Required VRAM: 5.4 GB
Updated: 2026-04-01
Maintainer: TheBloke
Model Type: llama
Model Files: 5.4 GB, 6.9 GB, 6.3 GB, 5.7 GB, 7.4 GB, 7.9 GB, 7.4 GB, 9.0 GB, 9.2 GB, 9.0 GB, 10.7 GB, 13.8 GB
Supported Languages: en
GGUF Quantization: Yes
Quantization Type: gguf
Model Architecture: AutoModel
License: cc-by-nc-sa-4.0
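The file sizes listed under "Model Files" correspond to different quantization levels, trading accuracy for memory. A sketch for picking the largest file that fits a VRAM budget, using the sizes from this page; the assumptions that file size approximates memory needed for full GPU offload and that roughly 1 GB of overhead covers the KV cache and runtime buffers are ours, not measured figures:

```python
# Quantized file sizes in GB, as listed on this page (smallest to largest
# is not guaranteed; the list preserves the page's order).
FILE_SIZES_GB = [5.4, 6.9, 6.3, 5.7, 7.4, 7.9, 7.4, 9.0, 9.2, 9.0, 10.7, 13.8]

def pick_quant(vram_gb: float, overhead_gb: float = 1.0):
    """Return the largest listed file size that fits within the VRAM budget.

    overhead_gb is a rough allowance for KV cache and runtime buffers
    (an assumption). Returns None when even the smallest file is too big.
    """
    budget = vram_gb - overhead_gb
    candidates = [size for size in FILE_SIZES_GB if size <= budget]
    return max(candidates) if candidates else None
```

For example, on an 8 GB card this selects the 6.9 GB file rather than the 7.4 GB one, leaving headroom for the context cache.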

Best Alternatives to Stable Vicuna 13B GGUF

Best Alternatives                          Context / RAM     Downloads   Likes
MythoMax L2 13B GGUF                       0K / 5.4 GB       59672       224
Llama 2 13B Chat GGUF                      0K / 5.4 GB       17331       204
Llama 3 13B Instruct V0.1 GGUF             0K / 5.1 GB       133         25
LLaMa 3 Base Zeroed 13B GGUF               0K / 5 GB         126         1
Hermes 2 Pro Llama 3 13B GGUF              0K / 4.6 GB       76          1
...aMa 3 Instruct Zeroed 13B GGUF          0K / 5 GB         24          1
Llama3 13B Ku GGUF                         0K / 8.7 GB       17          0
Model                                      10K / 13.8 GB     5           0
Codellama 7B Instruct GGUF                 0K / 2.8 GB       201         1
EstopianMaid 13B GGUF                      0K / 4.8 GB       1807        56
Note: a green score (e.g. "73.2") means that the model outperforms TheBloke/stable-vicuna-13B-GGUF.

Rank the Stable Vicuna 13B GGUF Capabilities

🆘 Have you tried this model? Rate its performance. Your feedback helps the ML community identify the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20260328a