Japanese Stablelm Instruct Beta 7B AWQ by TheBloke


4-bit, Autotrain compatible, AWQ, Base model (quantized): stabilityai/japanese-stablelm-instruct-beta-7b, Dataset: kunishou/databricks-dolly-15k-ja, Dataset: kunishou/hh-rlhf-49k-ja, Dataset: kunishou/oasst1-89k-ja, Instruct, Ja, Japanese-stablelm, Llama, Quantized, Region: us, Safetensors

Japanese Stablelm Instruct Beta 7B AWQ Benchmarks

nn.n% indicates how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Japanese Stablelm Instruct Beta 7B AWQ (TheBloke/japanese-stablelm-instruct-beta-7B-AWQ)

Japanese Stablelm Instruct Beta 7B AWQ Parameters and Internals

Model Type 
llama
Use Cases 
Areas:
research, commercial applications
Limitations:
The pre-training dataset may have contained offensive or inappropriate content.
Considerations:
It is recommended not to use the model for applications that may cause harm or distress to individuals or groups.
Supported Languages 
Japanese (fluent)
Training Details 
Data Sources:
kunishou/hh-rlhf-49k-ja, kunishou/databricks-dolly-15k-ja, kunishou/oasst1-89k-ja
Model Architecture:
Llama2 transformer architecture
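
The instruction datasets listed above are public Hugging Face datasets and can be inspected directly. A minimal sketch, assuming the `datasets` library is installed and the Hub is reachable (the split name and printed fields are whatever the dataset actually ships with):

```python
# Sketch: peek at one of the fine-tuning datasets named above.
# Assumes `pip install datasets` and network access to the Hugging Face Hub.
from datasets import load_dataset

dolly_ja = load_dataset("kunishou/databricks-dolly-15k-ja", split="train")
print(dolly_ja)      # number of rows and column names
print(dolly_ja[0])   # one instruction/response record
```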
Safety Evaluation 
Ethical Considerations:
The pre-training dataset may have contained offensive or inappropriate content even after applying data cleansing filters, and this can be reflected in the model's generated text. Users should exercise reasonable caution when using these models in production systems. Do not use the model for any applications that may cause harm or distress to individuals or groups.
Input Output 
Input Format:
<s>[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n{prompt} [/INST]
Accepted Modalities:
text
Output Format:
text
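
A minimal inference sketch that ties the prompt format above to this AWQ checkpoint. It assumes transformers 4.35.0 or newer with the autoawq package installed (AWQ loading support arrived in 4.35) and a CUDA GPU; the Japanese system prompt is an illustrative placeholder rather than text taken from the model card:

```python
# Sketch: load the AWQ checkpoint and generate from a Llama-2-style prompt.
# Assumptions: transformers >= 4.35.0, autoawq installed, CUDA available.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/japanese-stablelm-instruct-beta-7B-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

system = "あなたは役立つアシスタントです。"   # placeholder system prompt
user = "日本の首都はどこですか？"             # placeholder user message

# The tokenizer prepends the <s> BOS token itself, so it is omitted here.
prompt = f"[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST] "

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

The same weights can also be served through inference engines with AWQ support, such as vLLM.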
LLM Name: Japanese Stablelm Instruct Beta 7B AWQ
Repository 🤗: https://huggingface.co/TheBloke/japanese-stablelm-instruct-beta-7B-AWQ
Model Name: Japanese StableLM Instruct Beta 7B
Model Creator: Stability AI
Base Model(s): stabilityai/japanese-stablelm-instruct-beta-7b
Model Size: 7b
Required VRAM: 3.9 GB
Updated: 2025-09-23
Maintainer: TheBloke
Model Type: llama
Instruction-Based: Yes
Model Files: 3.9 GB
Supported Languages: ja
AWQ Quantization: Yes
Quantization Type: awq
Model Architecture: LlamaForCausalLM
License: llama2
Context Length: 4096
Model Max Length: 4096
Transformers Version: 4.35.0
Tokenizer Class: LlamaTokenizer
Beginning of Sentence Token: <s>
End of Sentence Token: </s>
Unk Token: <unk>
Vocabulary Size: 32000
Torch Data Type: float16
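
The context length, special tokens, and vocabulary size listed above can be cross-checked against the hosted config and tokenizer. A small sketch, assuming network access to the Hugging Face Hub:

```python
# Sketch: verify the metadata fields above straight from the repository.
from transformers import AutoConfig, AutoTokenizer

model_id = "TheBloke/japanese-stablelm-instruct-beta-7B-AWQ"
config = AutoConfig.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

print(config.max_position_embeddings)   # context length, expected 4096
print(config.vocab_size)                # expected 32000
print(tokenizer.bos_token, tokenizer.eos_token, tokenizer.unk_token)  # <s> </s> <unk>
```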

Best Alternatives to Japanese Stablelm Instruct Beta 7B AWQ

Best Alternatives | Context / RAM | Downloads / Likes
Llama 2 7B 32K Instruct AWQ | 32K / 3.9 GB | 102
CodeLlama 7B Instruct AWQ | 16K / 3.9 GB | 12794
...eechless Tora Code 7B V1.0 AWQ | 16K / 3.9 GB | 61
...ama 7B Instruct Hf W4 G128 AWQ | 16K / 3.9 GB | 60
CausalLM 7B AWQ | 8K / 5.8 GB | 133
...essianai 7B Chat Bilingual AWQ | 8K / 3.9 GB | 62
Leo Hessianai 7B Chat AWQ | 8K / 3.9 GB | 61
...epseek Math 7B Instruct AWQ Q4 | 4K / 4.8 GB | 130
Llama 2 7B Ft Instruct Es AWQ | 4K / 3.9 GB | 61
Swallow 7B Instruct AWQ | 4K / 4.1 GB | 61
Note: a green score (e.g., "73.2") indicates that the model outperforms TheBloke/japanese-stablelm-instruct-beta-7B-AWQ.

Rank the Japanese Stablelm Instruct Beta 7B AWQ Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124