| LLM Name | Internlm2 5 7B Chat 8bit |
| Repository 🤗 | https://huggingface.co/mlx-community/internlm2_5-7b-chat-8bit |
| Model Size | 7b |
| Required VRAM | 8.2 GB |
| Updated | 2025-10-22 |
| Maintainer | mlx-community |
| Model Type | internlm2 |
| Model Files | |
| Quantization Type | 8bit |
| Model Architecture | InternLM2ForCausalLM |
| License | other |
| Context Length | 32768 |
| Model Max Length | 32768 |
| Transformers Version | 4.41.0 |
| Is Biased | 0 |
| Tokenizer Class | InternLM2Tokenizer |
| Padding Token | </s> |
| Vocabulary Size | 92544 |
| Torch Data Type | bfloat16 |
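Since this is an 8-bit MLX quantization, it is normally loaded with the `mlx-lm` package on Apple silicon rather than with `transformers`. Below is a minimal sketch assuming a current `mlx-lm` install; the prompt text and generation parameters are illustrative only, not part of this card.

```python
# Minimal sketch: loading the 8-bit MLX quantization with mlx-lm (Apple silicon).
# Assumes `pip install mlx-lm`; prompt and max_tokens are illustrative values.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/internlm2_5-7b-chat-8bit")

# InternLM2.5 chat models expect the chat template bundled with the tokenizer.
messages = [{"role": "user", "content": "Summarize InternLM2.5 in one sentence."}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

response = generate(model, tokenizer, prompt=prompt, max_tokens=256)
print(response)
```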
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| Internlm2 5 7B Chat 4bit | 32K / 5.2 GB | 41 | 5 |
| Internlm2 5 7B Chat 4bit | 32K / 4.3 GB | 8 | 2 |
| InternLM2 Math Plus 7B 4bit | 8K / 5.2 GB | 0 | 0 |
| Internlm2 5 7B | 256K / 15.5 GB | 4762 | 17 |
| Internlm2 5 7B Chat 1M | 256K / 15.4 GB | 1015 | 72 |
| Internlm2 5 7B Chat | 32K / 15.4 GB | 33450 | 197 |
| Internlm2 7B | 32K / 15.5 GB | 25545 | 43 |
| MD Judge V0.2 Internlm2 7b | 32K / 15.4 GB | 4245 | 17 |
| ChemLLM 7B Chat 1 5 SFT | 32K / 15.4 GB | 1219 | 4 |
| Internlm2 Chat 7B | 32K / 15.4 GB | 44525 | 83 |
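The RAM figures in the table can be sanity-checked against the actual weight-file sizes in each repository. The sketch below uses the `huggingface_hub` API; only the 8-bit repo ID comes from this card, and applying it to the alternatives would require their exact repo IDs, which are not listed here.

```python
# Hedged sketch: verify the on-disk size of a repo's weight files with huggingface_hub.
# Only the 8-bit repo ID is taken from this card; other repo IDs would be assumptions.
from huggingface_hub import HfApi

api = HfApi()
info = api.model_info("mlx-community/internlm2_5-7b-chat-8bit", files_metadata=True)

# Sum the safetensors shards; this should roughly match the 8.2 GB listed above.
total_bytes = sum(s.size or 0 for s in info.siblings if s.rfilename.endswith(".safetensors"))
print(f"Weight files: {total_bytes / 1e9:.1f} GB")
```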