Suzume Llama 3 8B Multilingual by lightblue


Tags: arXiv:2405.12612 · Autotrain compatible · Base model (finetune): meta-llama/Meta-Llama-3-8B-Instruct · Conversational · Endpoints compatible · Generated from trainer · Instruct · Llama · PyTorch · Region: us · Safetensors · Sharded · TensorFlow

Suzume Llama 3 8B Multilingual Benchmarks

Suzume Llama 3 8B Multilingual (lightblue/suzume-llama-3-8B-multilingual)

Suzume Llama 3 8B Multilingual Parameters and Internals

Model Type: multilingual, text generation

Use Cases:
- Areas: research, commercial applications
- Applications: multilingual chat applications
- Primary Use Cases: multilingual conversational AI
- Limitations: excludes certain problem categories for Russian; not yet fully evaluated
- Considerations: ongoing evaluation; feedback encouraged

Additional Notes: under ongoing development, with future releases planned
Supported Languages: multilingual, with high proficiency in the supported languages
Training Details:
- Data Sources: lightblue/tagengo-gpt4, lmsys/lmsys-chat-1m, megagonlabs/instruction_ja, openchat/openchat_sharegpt4_dataset
- Data Volume: ~90,000 multilingual conversations
- Methodology: fine-tuning (see the sketch below)
- Context Length: 8192
- Training Time: 2.5 hours
- Hardware Used: 4 x A100 (80GB) GPUs
- Model Architecture: LlamaForCausalLM
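
To make the recipe concrete, here is a minimal, hypothetical sketch of chat-format supervised fine-tuning at the listed 8192 context length in bfloat16, using the Hugging Face Trainer. The toy in-memory dataset, hyperparameters, and output directory are illustrative assumptions; the authors' actual training stack and settings are not reproduced here.

```python
# Hypothetical SFT sketch matching the details above; not the authors' exact setup.
import torch
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # the card lists <|end_of_text|> as pad

# Toy stand-in for the ~90,000 multilingual conversations from the listed datasets.
conversations = [[
    {"role": "user", "content": "Bonjour, présente-toi."},
    {"role": "assistant", "content": "Bonjour ! Je suis un assistant multilingue."},
]]

def to_features(example):
    # Render the conversation with the Llama 3 chat template, then tokenize.
    text = tokenizer.apply_chat_template(example["messages"], tokenize=False)
    return tokenizer(text, truncation=True, max_length=8192)

dataset = Dataset.from_dict({"messages": conversations}).map(to_features)

trainer = Trainer(
    model=AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16),
    args=TrainingArguments(output_dir="suzume-sft", bf16=True,
                           per_device_train_batch_size=1, num_train_epochs=1),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```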
Input Output:
- Input Format: prompt messages constructed in chat format
- Accepted Modalities: text
- Output Format: generated text in chat-response format
- Performance Tips: use vLLM for optimal inference speed (example below)
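
Since the card recommends vLLM, a minimal inference sketch follows; the sampling parameters and example prompt are illustrative assumptions.

```python
# Minimal vLLM inference sketch; sampling settings are illustrative.
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

model_id = "lightblue/suzume-llama-3-8B-multilingual"
tokenizer = AutoTokenizer.from_pretrained(model_id)
llm = LLM(model=model_id)

# Build the prompt in chat format, as the input-format note above requires.
messages = [{"role": "user", "content": "こんにちは、日本の首都はどこですか？"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False,
                                       add_generation_prompt=True)

outputs = llm.generate([prompt], SamplingParams(temperature=0.7, max_tokens=256))
print(outputs[0].outputs[0].text)
```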
LLM Name: Suzume Llama 3 8B Multilingual
Repository: https://huggingface.co/lightblue/suzume-llama-3-8B-multilingual
Base Model(s): Meta Llama 3 8B Instruct (meta-llama/Meta-Llama-3-8B-Instruct)
Model Size: 8B
Required VRAM: 16.1 GB
Updated: 2025-09-10
Maintainer: lightblue
Model Type: llama
Instruction-Based: Yes
Model Files: 5.0 GB (1-of-4), 5.0 GB (2-of-4), 4.9 GB (3-of-4), 1.2 GB (4-of-4); 16.1 GB total
Model Architecture: LlamaForCausalLM
License: other
Context Length: 8192
Model Max Length: 8192
Transformers Version: 4.38.2
Tokenizer Class: PreTrainedTokenizerFast
Padding Token: <|end_of_text|>
Vocabulary Size: 128256
Torch Data Type: bfloat16
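
For reference, here is a plain-transformers load that matches the specs above (bfloat16 weights, chat-format prompting); the device placement and sample prompt are assumptions.

```python
# Plain-transformers loading sketch matching the specs above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lightblue/suzume-llama-3-8B-multilingual"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the listed torch dtype; ~16.1 GB of weights
    device_map="auto",           # assumes `accelerate` is installed
)

messages = [{"role": "user", "content": "Hola, ¿qué puedes hacer?"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True,
                                          return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```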

Quantized Models of the Suzume Llama 3 8B Multilingual

| Model | Likes | Downloads | VRAM |
|-------|-------|-----------|------|
| Suzume Llama 3 8B Multilingual | 0 | 8 | 4 GB |
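
The quantized variant above runs in roughly 4 GB of VRAM. One generic way to reach that footprint is 4-bit loading with bitsandbytes, sketched below; this is not necessarily how the listed quantized model was produced.

```python
# Generic 4-bit loading sketch via bitsandbytes (~4-5 GB of VRAM for an 8B model).
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

model = AutoModelForCausalLM.from_pretrained(
    "lightblue/suzume-llama-3-8B-multilingual",
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",              # store weights in 4-bit NF4
        bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16
    ),
    device_map="auto",
)
```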

Best Alternatives to Suzume Llama 3 8B Multilingual

| Best Alternatives | Context / RAM | Downloads | Likes |
|-------------------|---------------|-----------|-------|
| ...otron 8B UltraLong 4M Instruct | 4192K / 32.1 GB | 5463 | 119 |
| UltraLong Thinking | 4192K / 16.1 GB | 108 | 3 |
| ...a 3.1 8B UltraLong 4M Instruct | 4192K / 32.1 GB | 176 | 24 |
| ...otron 8B UltraLong 2M Instruct | 2096K / 32.1 GB | 1181 | 15 |
| ...a 3.1 8B UltraLong 2M Instruct | 2096K / 32.1 GB | 875 | 9 |
| ...otron 8B UltraLong 1M Instruct | 1048K / 32.1 GB | 6123 | 51 |
| ...a 3.1 8B UltraLong 1M Instruct | 1048K / 32.1 GB | 1387 | 29 |
| Zero Llama 3.1 8B Beta6 | 1048K / 16.1 GB | 225 | 1 |
| ...dger Nu Llama 3.1 8B UltraLong | 1048K / 16.2 GB | 17 | 3 |
| ....1 1million Ctx Dark Planet 8B | 1048K / 32.3 GB | 11 | 3 |

Rank the Suzume Llama 3 8B Multilingual Capabilities

🆘 Have you tried this model? Rate its performance. This feedback helps the ML community identify the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124