| LLM Name | Dbrx Base Converted V2 4bit Gptq Gptq |
|---|---|
| Repository 🤗 | https://huggingface.co/LnL-AI/dbrx-base-converted-v2-4bit-gptq-gptq |
| Updated | 2025-08-18 |
| Maintainer | LnL-AI |
| Model Type | dbrx |
| GPTQ Quantization | Yes |
| Quantization Type | gptq\|4bit |
| Model Architecture | DbrxForCausalLM |
| License | other |
| Transformers Version | 4.38.2 |
| Tokenizer Class | TiktokenTokenizerWrapper |
| Padding Token | <\|endoftext\|> |
| Vocabulary Size | 100352 |
| Torch Data Type | float16 |
| Errors | replace |
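
The details above (repository id, GPTQ 4-bit quantization, `DbrxForCausalLM` architecture, float16 dtype) are enough to sketch how the checkpoint would typically be loaded with `transformers`. The snippet below is a minimal, illustrative sketch only: it assumes a GPTQ backend such as `auto-gptq`/`optimum` is installed, that enough GPU memory is available for the 4-bit DBRX weights, and that `trust_remote_code=True` may be needed on older transformers releases (the card lists 4.38.2). The prompt string is just an example.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repository listed in the details table above.
model_id = "LnL-AI/dbrx-base-converted-v2-4bit-gptq-gptq"

# The DBRX architecture and TiktokenTokenizerWrapper may not ship with older
# transformers releases, hence trust_remote_code (assumption, not confirmed here).
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)

# transformers reads the repo's GPTQ quantization config automatically when a
# GPTQ backend (auto-gptq / optimum) is installed.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",          # shard across available GPUs
    torch_dtype=torch.float16,  # matches the card's Torch Data Type
    trust_remote_code=True,
)

prompt = "DBRX is a mixture-of-experts language model that"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```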
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| ...nverted V2 4bit Gptq Marlin V2 | 0K / GB | 4 | 1 |
| Dbrx Instruct 4.25bpw EXL2 | 0K / 71.1 GB | 4 | 1 |