Granite 4.0 H Small FP8 is an open-source language model by ibm-granite. Key features: 32.6B parameters, 33.6 GB required VRAM, 128K context length, Apache 2.0 license, quantized (FP8).
| Attribute | Value |
|---|---|
| LLM Name | Granite 4.0 H Small FP8 |
| Repository 🤗 | https://huggingface.co/ibm-granite/granite-4.0-h-small-FP8 |
| Base Model(s) | |
| Model Size | 32.6B |
| Required VRAM | 33.6 GB |
| Updated | 2026-03-27 |
| Maintainer | ibm-granite |
| Model Type | granitemoehybrid |
| Model Files | |
| GGUF Quantization | Yes |
| Quantization Type | gguf |
| Model Architecture | GraniteMoeHybridForCausalLM |
| License | apache-2.0 |
| Context Length | 131072 |
| Model Max Length | 131072 |
| Transformers Version | 4.56.1 |
| Tokenizer Class | GPT2Tokenizer |
| Padding Token | <|pad|> |
| Vocabulary Size | 100352 |
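
The specifications above map directly onto the standard `transformers` loading flow. Below is a minimal, hedged sketch of loading and prompting the model; it assumes `transformers` 4.56+ (matching the version listed above), `accelerate` installed for `device_map="auto"`, and roughly 33.6 GB of GPU memory for the FP8 checkpoint. The prompt and generation settings are illustrative only.

```python
# Minimal loading sketch (assumptions: standard transformers AutoModel API,
# transformers >= 4.56 with GraniteMoeHybrid support, accelerate installed).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ibm-granite/granite-4.0-h-small-FP8"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # shard across available GPUs (~33.6 GB needed in total)
    torch_dtype="auto",  # keep the precision stored in the checkpoint
)

# Illustrative prompt; the model's context limit is 131072 tokens.
prompt = "Summarize the key ideas behind mixture-of-experts language models."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```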
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| ...ranite 4.0 H Small FP8 Dynamic | 128K / 33.6 GB | 51 | 3 |