LLM Name | Franziska Maxtral 8x22B V1 |
Repository 🤗 | https://huggingface.co/Sao10K/Franziska-Maxtral-8x22B-v1 |
Model Size | 140.6B |
Required VRAM | 280.8 GB |
Updated | 2024-07-04 |
Maintainer | Sao10K |
Model Type | mixtral |
Supported Languages | en |
Model Architecture | MixtralForCausalLM |
License | cc-by-nc-4.0 |
Context Length | 65536 |
Model Max Length | 65536 |
Transformers Version | 4.39.3 |
Tokenizer Class | LlamaTokenizer |
Vocabulary Size | 32000 |
Torch Data Type | bfloat16 |
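
The specifications above (MixtralForCausalLM architecture, bfloat16 weights, LlamaTokenizer with a 32000-token vocabulary, 65536-token context) map directly onto a standard `transformers` loading call. Below is a minimal loading sketch, not an official recipe from the model card: it assumes `transformers` >= 4.39.3 with `accelerate` installed, and a host with roughly the 280 GB of GPU memory listed above. The prompt is purely illustrative.

```python
# Minimal loading sketch -- assumes transformers >= 4.39.3 and accelerate,
# plus enough GPU memory (~280 GB across devices) for the bf16 weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Sao10K/Franziska-Maxtral-8x22B-v1"

tokenizer = AutoTokenizer.from_pretrained(repo)  # LlamaTokenizer, 32000-token vocab
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.bfloat16,  # matches the card's Torch Data Type
    device_map="auto",           # shard the 8x22B experts across available GPUs
)

prompt = "Write a short story about a lighthouse keeper."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```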
Best Alternatives | Context / RAM | Downloads | Likes
---|---|---|---
Zephyr Orpo 141B A35b V0.1 | 64K / 207.2 GB | 49 | 269 |
Mixtral 8x22B Instruct V0.1 | 64K / 221.4 GB | 10978 | 734 |
WizardLM 2 8x22B | 64K / 216.8 GB | 9831 | 406 |
Mixtral 8x22B V0.1 | 64K / 221.6 GB | 3890 | 230 |
Mixtral 8x22B V0.1 | 64K / 212 GB | 1184 | 672 |
Mixtral 8x22B V0.3 | 64K / 221.4 GB | 39 | 3 |
...ixtral 8x22B Instruct V0.1 FP8 | 64K / 140.9 GB | 62 | 0 |
Dolphin 2.9.2 Mixtral 8x22b | 64K / 207.2 GB | 9311 | 40 |
Dolphin 2.9.2 Mixtral 8x22b | 64K / 207.2 GB | 9059 | 41 |
XLAM 8x22b R | 64K / 211.8 GB | 763 | 45 |
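
The RAM figures in the table track parameter count times bytes per weight: at bfloat16 (2 bytes per parameter), 140.6B parameters come to roughly 281 GB, matching the 280.8 GB listed above, and the FP8 row lands near half that. A back-of-the-envelope sketch follows; it is illustrative only, since real deployments also need memory for the KV cache and activations.

```python
# Rough weight-memory estimate: parameter count x bytes per parameter.
# Illustrative only; actual usage also needs KV cache and activations.
def weight_gb(params_billion: float, bytes_per_param: float) -> float:
    return params_billion * bytes_per_param  # 1e9 params * bytes / 1e9 bytes-per-GB

print(weight_gb(140.6, 2))  # bf16 -> ~281.2 GB, close to the 280.8 GB listed
print(weight_gb(140.6, 1))  # fp8  -> ~140.6 GB, close to the FP8 row's 140.9 GB
```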