Detail | Value
---|---
LLM Name | UNA 34BeagleSimpleMath 32K V1
Repository 🤗 | https://huggingface.co/fblgit/UNA-34BeagleSimpleMath-32K-v1
Model Size | 34.4B
Required VRAM | 69.2 GB
Updated | 2025-06-09
Maintainer | fblgit
Model Type | llama
Model Architecture | LlamaForCausalLM
License | apache-2.0
Context Length | 32768
Model Max Length | 32768
Transformers Version | 4.37.0.dev0
Tokenizer Class | LlamaTokenizer
Padding Token | `<s>`
Vocabulary Size | 64000
Torch Data Type | bfloat16
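The Required VRAM figure above is consistent with the parameter count and data type: each bfloat16 parameter occupies 2 bytes, so 34.4B parameters come to roughly 68.8 GB of weights, with the remainder covered by non-weight tensors and buffers. A quick sanity-check sketch (the helper name is my own, not from the model card):

```python
# Estimate the raw weight footprint of a model from its parameter
# count and per-parameter byte width (2 bytes for bfloat16).
def weight_footprint_gb(num_params: float, bytes_per_param: int = 2) -> float:
    """Return the approximate weight size in decimal gigabytes."""
    return num_params * bytes_per_param / 1e9

# 34.4B parameters in bfloat16 -> ~68.8 GB, close to the listed 69.2 GB.
print(round(weight_footprint_gb(34.4e9), 1))
```

The same arithmetic explains why 8-bit or 4-bit quantized variants of 34B-class models fit in roughly half or a quarter of this footprint.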
Best Alternatives | Context / RAM | Downloads | Likes
---|---|---|---
Pallas 0.5 LASER 0.5 | 195K / 68.9 GB | 186 | 0 |
Pallas 0.5 LASER 0.4 | 195K / 68.9 GB | 205 | 1 |
Pallas 0.5 LASER 0.3 | 195K / 68.9 GB | 202 | 0 |
Pallas 0.5 LASER 0.2 | 195K / 68.9 GB | 191 | 0 |
Pallas 0.5 LASER 0.6 | 195K / 68.9 GB | 13 | 5 |
Pallas 0.5 LASER Exp2 0.1 | 195K / 68.9 GB | 14 | 0 |
Yi Bagel 2x34b | 195K / 68.8 GB | 14 | 2 |
AnFeng V3.1 Avocet | 128K / 69.2 GB | 1690 | 0 |
UNA 34Beagles 32K Bf16 V1 | 32K / 69.2 GB | 141 | 10 |
PiVoT SUS RP | 8K / 69.2 GB | 127 | 5 |