LLaVA 13B Delta V0 by liuhaotian


Tags: Autotrain compatible, Llama, PyTorch, Region: us, Sharded

LLaVA 13B Delta V0 Benchmarks

nn.n% — how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
LLaVA 13B Delta V0 (liuhaotian/LLaVA-13b-delta-v0)

LLaVA 13B Delta V0 Parameters and Internals

Model Type:
Open-source chatbot; auto-regressive language model based on the transformer architecture
Use Cases:
Areas: research on large multimodal models and chatbots
Additional Notes:
This delta model cannot be used directly. Users must apply it on top of the original LLaMA weights to obtain the actual LLaVA weights (a conceptual sketch of the merge follows this block).
Training Details:
Data Sources: CC3M, GPT-generated multimodal instruction-following data
Data Volume: 595K filtered image-text pairs, 150K GPT-generated instruction data
Training Time: April 2023
Model Architecture: Transformer
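Since the notes above describe the merge only in words, here is a conceptual sketch of what "applying the delta on top of the original LLaMA weights" means: each shared tensor in the delta checkpoint is summed with the corresponding base tensor, and the three extra vocabulary rows (32000 → 32003) are kept from the delta. The official LLaVA repository ships its own apply_delta script for this; the paths and the exact shape handling below are illustrative assumptions, not that script.

```python
# Conceptual sketch (not the official LLaVA apply_delta script): merge the delta
# checkpoint with the original LLaMA-13B weights to obtain usable LLaVA weights.
# Paths are placeholders; loading both models takes roughly 52 GB of CPU RAM in fp16.
import torch
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("/path/to/llama-13b", torch_dtype=torch.float16)
delta = AutoModelForCausalLM.from_pretrained(
    "liuhaotian/LLaVA-13b-delta-v0", torch_dtype=torch.float16
)

base_sd = base.state_dict()
for name, param in delta.state_dict().items():
    if name not in base_sd:
        continue  # tensors that exist only in the delta are kept as-is
    if param.shape == base_sd[name].shape:
        param.data.add_(base_sd[name])  # full LLaVA weight = delta + base
    else:
        # the vocabulary was extended from 32000 to 32003 tokens, so only the
        # overlapping rows of the embedding / LM-head matrices are summed
        rows = base_sd[name].shape[0]
        param.data[:rows].add_(base_sd[name])

delta.save_pretrained("/path/to/LLaVA-13b-v0")  # merged model, ~26 GB in float16
```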
LLM Name: LLaVA 13B Delta V0
Repository: https://huggingface.co/liuhaotian/LLaVA-13b-delta-v0
Model Size: 13B
Required VRAM: 26 GB
Updated: 2025-08-26
Maintainer: liuhaotian
Model Type: llama
Model Files: 0.0 GB, 9.9 GB (1 of 3), 9.9 GB (2 of 3), 6.2 GB (3 of 3)
Model Architecture: LlamaForCausalLM
License: apache-2.0
Model Max Length: 2048
Transformers Version: 4.28.0.dev0
Tokenizer Class: LlamaTokenizer
Vocabulary Size: 32003
Torch Data Type: float16
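For completeness, a minimal loading sketch using the settings listed above (LlamaForCausalLM, LlamaTokenizer, float16, 2048-token context). The local path is the assumed output of the delta merge sketched earlier; image understanding requires LLaVA's own multimodal code path, so this only exercises the text interface.

```python
# Minimal loading sketch based on the architecture/tokenizer/dtype from the table above.
# "/path/to/LLaVA-13b-v0" is the assumed output of the delta merge; the raw delta
# checkpoint itself is not meant to be run directly.
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained("/path/to/LLaVA-13b-v0")
model = LlamaForCausalLM.from_pretrained(
    "/path/to/LLaVA-13b-v0",
    torch_dtype=torch.float16,  # matches the card's float16 dtype, ~26 GB of VRAM
    device_map="auto",          # requires `accelerate`; or use .to("cuda") on a single GPU
)

prompt = "Briefly describe what a large multimodal model is."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```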

Best Alternatives to LLaVA 13B Delta V0

Best Alternatives                  Context / RAM    Downloads   Likes
Luminaura RP 13B                   128K / 26 GB     6           0
Yarn Llama 2 13B 128K              128K / 26 GB     207         112
Agent Llama2 13B 80K               80K / 26.4 GB    5           0
Chat Llama2 13B 80K                80K / 52.8 GB    7           0
LongAlign 13B 64K                  64K / 26 GB      51          13
LongAlign 13B 64K                  64K / 26 GB      11          13
LongAlign 13B 64K Base             64K / 26 GB      46          3
LongAlign 13B 64K Base             64K / 26 GB      6           3
Openbuddy Llama2 13B V15p1 64K     64K / 26.1 GB    3           4
Openbuddy Llama2 13b64k V15        64K / 26.1 GB    6           2
Note: green Score (e.g. "73.2") means that the model is better than liuhaotian/LLaVA-13b-delta-v0.

Rank the LLaVA 13B Delta V0 Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  

The catalog lists 50,900 open-source LLMs and SLMs in total.

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124