DociproLLM 7B is an open-source large language model released by sartifyllc. Key specifications: 7B parameters, 27.8 GB required VRAM, 2K context length, HF Score 44.2, LLM Explorer Score 0.14. Benchmark results: ARC 47.9, HellaSwag 78.1, MMLU 27.8, TruthfulQA 34.3, WinoGrande 72.5, GSM8K 4.6.
| LLM Name | DociproLLM 7B |
|---|---|
| Repository 🤗 | https://huggingface.co/sartifyllc/dociproLLM-7B |
| Model Size | 7b |
| Required VRAM | 27.8 GB |
| Updated | 2026-03-31 |
| Maintainer | sartifyllc |
| Model Type | falcon |
| Model Files | |
| Model Architecture | FalconForCausalLM |
| Context Length | 2048 |
| Model Max Length | 2048 |
| Transformers Version | 4.36.0.dev0 |
| Is Biased | 0 |
| Tokenizer Class | PreTrainedTokenizerFast |
| Vocabulary Size | 65024 |
| Torch Data Type | float32 |
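
The configuration above maps directly onto the standard Hugging Face `transformers` loading path. Below is a minimal sketch of loading and prompting the model, assuming the repository id `sartifyllc/dociproLLM-7B` from the table and that the published float32 weights fit your hardware (a 7B-parameter model at 4 bytes per parameter is roughly 28 GB, consistent with the 27.8 GB VRAM figure above). The dtype and device settings are illustrative, not prescribed by the model card.

```python
# Minimal sketch: load DociproLLM 7B (FalconForCausalLM) with transformers.
# Assumes the repo id from the table above; dtype/device settings are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sartifyllc/dociproLLM-7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)  # PreTrainedTokenizerFast, vocab size 65024
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float32,  # matches the card; torch.bfloat16 roughly halves VRAM
    device_map="auto",          # requires `accelerate`; spreads layers across available devices
)

prompt = "Summarize the purpose of a 7B parameter language model in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Keep prompt plus generation within the 2048-token context length listed above.
output_ids = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

If the repository ships custom modeling code rather than relying on the native Falcon support in transformers 4.36, passing `trust_remote_code=True` to `from_pretrained` may be necessary.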
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| F1H10M 0000 | 2K / 13.9 GB | 2529 | 0 |
| Saqr 7B Merged | 2K / 13.9 GB | 473 | 1 |
| ... Openassistant Toxicity Reduce | 2K / 30.4 GB | 5 | 0 |
| F1H10M 0000 | 2K / 13.9 GB | 14 | 0 |
| Falcon Chatbot | 2K / 5.5 GB | 3 | 1 |
| Really Tiny Falcon Testing | 2K / 0 GB | 9004 | 2 |
| New Falcon | 2K / 4.3 GB | 8 | 1 |
| Claire 7B 0.1 Instruct | 2K / 7.2 GB | 8 | 1 |
| Tiny Testing Falcon Alibi | 2K / 0 GB | 1427 | 1 |
| ...lcon7b Linear Equations Merged | 2K / 13.9 GB | 4 | 1 |
Have you tried this model? Rate its performance. Your feedback helps the ML community identify the most suitable model for their needs. Every contribution makes a difference!