Property | Value
---|---
LLM Name | EasyContext 256K Danube2 1.8B
Repository 🤗 | https://huggingface.co/PY007/EasyContext-256K-danube2-1.8b
Model Size | 1.8B
Required VRAM | 3.7 GB
Updated | 2025-06-09
Maintainer | PY007
Model Type | llama
Model Files | 
Model Architecture | LlamaForCausalLM
Context Length | 8192
Model Max Length | 8192
Transformers Version | 4.39.1
Tokenizer Class | LlamaTokenizer
Vocabulary Size | 32000
Torch Data Type | bfloat16
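
For reference, a minimal loading sketch using the standard Hugging Face `transformers` API (version per the table above). The repo ID, dtype, and tokenizer class come from the rows above; the prompt and generation settings are illustrative assumptions, not part of the model card.

```python
# Minimal sketch: load the checkpoint listed above with transformers
# (>= 4.39.1 per the table). Generation settings are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "PY007/EasyContext-256K-danube2-1.8b"

tokenizer = AutoTokenizer.from_pretrained(repo_id)  # LlamaTokenizer
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # matches the Torch Data Type row
    device_map="auto",           # needs `accelerate`; ~3.7 GB VRAM per the table
)

inputs = tokenizer("Hello, world!", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```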
Best Alternatives | Context / RAM | Downloads | Likes
---|---|---|---
Ssh 1.8B | 8K / 3.7 GB | 17 | 0 |
Llm Jp 3 1.8B Instruct3 | 4K / 3.7 GB | 2032 | 2 |
Llm Jp 3 1.8B Instruct | 4K / 3.7 GB | 1976 | 24 |
Llm Jp 3 1.8B | 4K / 3.7 GB | 1995 | 14 |
Llm Jp 3 1.8B Instruct | 4K / 3.7 GB | 14 | 0 |
Qwen1.5 1.8B Llamafy | 4K / 3.7 GB | 12 | 1 |
Tinyllama 1.8B Trismegistus | 2K / 1.9 GB | 11 | 3 |
Llama1 S 1.8B Experimental | 2K / 7.3 GB | 24 | 4 |
TinyChat 1776K | 0.3K / 0 GB | 20 | 9 |