| LLM Name | Calm3 22B RP V0.1 |
|---|---|
| Repository 🤗 | https://huggingface.co/Aratako/calm3-22b-RP-v0.1 |
| Base Model(s) | |
| Model Size | 22B |
| Required VRAM | 44.9 GB |
| Updated | 2025-09-01 |
| Maintainer | Aratako |
| Model Type | llama |
| Model Files | |
| Supported Languages | ja |
| Model Architecture | LlamaForCausalLM |
| License | cc-by-nc-sa-4.0 |
| Context Length | 16384 |
| Model Max Length | 16384 |
| Transformers Version | 4.44.0 |
| Tokenizer Class | GPTNeoXTokenizer |
| Padding Token | `<|padding|>` |
| Vocabulary Size | 65024 |
| Torch Data Type | bfloat16 |
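
A minimal loading sketch based on the metadata above, not an official usage snippet: it assumes `transformers` >= 4.44.0 (the version listed), a chat template shipped with the repository, and roughly 44.9 GB of GPU memory for the bfloat16 weights. The Japanese prompt is a hypothetical example.

```python
# Minimal sketch: load Aratako/calm3-22b-RP-v0.1 with Hugging Face transformers.
# Assumes transformers >= 4.44.0 and ~44.9 GB of GPU memory for bfloat16 weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Aratako/calm3-22b-RP-v0.1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the Torch Data Type listed above
    device_map="auto",           # shard across available GPUs if needed
)

# The card lists a 16384-token context window; keep prompt + output within it.
# Hypothetical Japanese prompt (the model's supported language is "ja").
messages = [{"role": "user", "content": "こんにちは。自己紹介をしてください。"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```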
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| Calm3 22B Chat | 16K / 44.9 GB | 2965 | 79 |
| Calm3 22B RP V2 | 16K / 44.9 GB | 7 | 19 |
| Llama2 22B Daydreamer V3 | 4K / 43.7 GB | 1814 | 11 |
| Platypus 2 22B Relora | 4K / 43.7 GB | 1766 | 1 |
| Llama2 22B | 4K / 43.7 GB | 1764 | 46 |
| Yousei 22B | 4K / 44.5 GB | 615 | 2 |
| Llama2 22B Blocktriangular | 4K / 43.7 GB | 1787 | 4 |
| Llama2 22B Daydreamer V2 | 4K / 43.7 GB | 6 | 2 |
| Llama2 22B Daydreamer V1 | 4K / 43.7 GB | 7 | 2 |
| Llama2 22B Empath Alpacagpt4 | 4K / 43.7 GB | 8 | 1 |