| Property | Value |
|---|---|
| LLM Name | Llm Jp 3.1 1.8B Instruct4 4bit |
| Repository 🤗 | https://huggingface.co/mlx-community/llm-jp-3.1-1.8b-instruct4-4bit |
| Base Model(s) | |
| Model Size | 1.8B |
| Required VRAM | 1.1 GB |
| Updated | 2025-07-02 |
| Maintainer | mlx-community |
| Model Type | llama |
| Model Files | |
| Supported Languages | en, ja |
| Quantization Type | 4bit |
| Model Architecture | LlamaForCausalLM |
| License | apache-2.0 |
| Context Length | 4096 |
| Model Max Length | 4096 |
| Transformers Version | 4.47.0 |
| Tokenizer Class | PreTrainedTokenizer |
| Padding Token | <PAD\|LLM-jp> |
| Vocabulary Size | 99584 |
| Torch Data Type | bfloat16 |
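Since the card lists an MLX 4-bit repository, the model can be run locally on Apple silicon with the `mlx-lm` package. The snippet below is a minimal sketch, assuming `mlx-lm` is installed; only the repository name comes from the card, while the prompt text and generation settings are illustrative placeholders.

```python
# Minimal sketch: load and run the 4-bit MLX model with mlx-lm (pip install mlx-lm).
# The prompt and max_tokens below are assumptions, not values from the model card.
from mlx_lm import load, generate

# Fetch/load the quantized weights from the repository listed above.
model, tokenizer = load("mlx-community/llm-jp-3.1-1.8b-instruct4-4bit")

# Build a chat-formatted prompt; the instruct model expects its chat template.
messages = [{"role": "user", "content": "自然言語処理とは何か、簡潔に説明してください。"}]
prompt = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=False
)

# Generate up to 256 tokens; keep prompt + output within the 4096-token context.
text = generate(model, tokenizer, prompt=prompt, max_tokens=256, verbose=True)
print(text)
```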
| Model | Likes | Downloads | VRAM |
|---|---|---|---|
| ...m Jp 3.1 1.8B Function Calling | 0 | 29 | 3 GB |
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| ...m Jp 3.1 1.8B Function Calling | 4K / 3.7 GB | 29 | 0 |
| Ssh 1.8B | 8K / 3.7 GB | 14 | 0 |
| EasyContext 256K Danube2 1.8B | 8K / 3.7 GB | 13 | 5 |
| Llm Jp 3.1 1.8B Instruct4 | 4K / 3.7 GB | 4993 | 3 |
| Llm Jp 3 1.8B Instruct3 | 4K / 3.7 GB | 2647 | 2 |
| Llm Jp 3 1.8B | 4K / 3.7 GB | 9291 | 14 |
| Llm Jp 3 1.8B Instruct | 4K / 3.7 GB | 3028 | 24 |
| Llm Jp 3 1.8B Instruct | 4K / 3.7 GB | 13 | 0 |
| Qwen1.5 1.8B Llamafy | 4K / 3.7 GB | 15 | 1 |
| Tinyllama 1.8B Trismegistus | 2K / 1.9 GB | 15 | 3 |