Tanuki 8B DPO V1.0 by weblab-GENIAC


Tags: Autotrain compatible, Conversational, Endpoints compatible, Japanese (ja), English (en), Llama, Region: US, Safetensors, Sharded, Tensorflow

Tanuki 8B DPO V1.0 Benchmarks

nn.n% — how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").

Tanuki 8B DPO V1.0 Parameters and Internals

Model Type: Large Language Model, Text Generation
Additional Notes: Quantized builds are also available, including AWQ 4-bit, GPTQ 4-bit, and GPTQ 8-bit variants.
Supported Languages: Japanese and English
Training Details:
Data volume: 1.3 trillion tokens
Methodology: Supervised fine-tuning (SFT) followed by DPO for dialogue adjustment
Input Output:
Input format: Japanese version of the Alpaca prompt format (see the inference sketch below)
Performance tips: The maintainers recommend using the default system prompt.
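A minimal inference sketch, assuming the standard Hugging Face transformers API; it is not the maintainers' official snippet. The chat template bundled with the tokenizer is expected to apply the Japanese Alpaca-style prompt format and the recommended default system prompt; the user message and generation settings here are illustrative.

```python
# Minimal inference sketch (assumed usage, not the maintainers' official snippet).
# The tokenizer's bundled chat template applies the Japanese Alpaca-style
# prompt format, including the recommended default system prompt.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "weblab-GENIAC/Tanuki-8B-dpo-v1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the bfloat16 weights listed below
    device_map="auto",
)

messages = [{"role": "user", "content": "たぬきについて教えてください。"}]  # "Tell me about tanuki."
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```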
LLM Name: Tanuki 8B DPO V1.0
Repository: https://huggingface.co/weblab-GENIAC/Tanuki-8B-dpo-v1.0
Model Size: 8B
Required VRAM: 15 GB
Updated: 2025-06-09
Maintainer: weblab-GENIAC
Model Type: llama
Model Files: 5.0 GB (1-of-4), 4.9 GB (2-of-4), 4.6 GB (3-of-4), 0.5 GB (4-of-4)
Supported Languages: ja, en
Model Architecture: LlamaForCausalLM
License: apache-2.0
Context Length: 8192
Model Max Length: 8192
Transformers Version: 4.43.3
Tokenizer Class: PreTrainedTokenizerFast
Padding Token: [PAD]
Vocabulary Size: 65024
Torch Data Type: bfloat16
Tanuki 8B DPO V1.0 (weblab-GENIAC/Tanuki-8B-dpo-v1.0)
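The 15 GB VRAM figure above is consistent with the parameter count and weight precision; a back-of-envelope check (an approximation that ignores activation and KV-cache overhead):

```python
# Rough weight-memory estimate: ~8B parameters at 2 bytes each (bfloat16).
# Ignores activations and the KV cache, so real usage runs somewhat higher.
params = 8.0e9
bytes_per_param = 2  # bfloat16
weights_gb = params * bytes_per_param / 1024**3
print(f"~{weights_gb:.1f} GB")  # ~14.9 GB, matching the 15 GB shard total
```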

Best Alternatives to Tanuki 8B DPO V1.0

Best Alternatives | Context / RAM | Downloads | Likes
...otron 8B UltraLong 4M Instruct | 4192K / 32.1 GB | 3284 | 108
UltraLong Thinking | 4192K / 16.1 GB | 367 | 2
...a 3.1 8B UltraLong 4M Instruct | 4192K / 32.1 GB | 176 | 24
...a 3.1 8B UltraLong 2M Instruct | 2096K / 32.1 GB | 875 | 9
...otron 8B UltraLong 2M Instruct | 2096K / 32.1 GB | 526 | 15
Zero Llama 3.1 8B Beta6 | 1048K / 16.1 GB | 958 | 1
...otron 8B UltraLong 1M Instruct | 1048K / 32.1 GB | 1808 | 45
...a 3.1 8B UltraLong 1M Instruct | 1048K / 32.1 GB | 1387 | 29
...xis Bookwriter Llama3.1 8B Sft | 1048K / 16.1 GB | 63 | 4
....1 1million Ctx Dark Planet 8B | 1048K / 32.3 GB | 90 | 2
Note: a green score (e.g., "73.2") indicates that the model outperforms weblab-GENIAC/Tanuki-8B-dpo-v1.0.



Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124