| Property | Value |
|---|---|
| LLM Name | Phi 3 Orpo V9.4 |
| Repository 🤗 | https://huggingface.co/cstr/phi-3-orpo-v9_4 |
| Base Model(s) | |
| Model Size | 2.1B |
| Required VRAM | 2.3 GB |
| Updated | 2025-06-09 |
| Maintainer | cstr |
| Model Type | llama |
| Model Files | |
| Supported Languages | en |
| Model Architecture | LlamaForCausalLM |
| License | apache-2.0 |
| Context Length | 4096 |
| Model Max Length | 4096 |
| Transformers Version | 4.40.0 |
| Tokenizer Class | LlamaTokenizer |
| Padding Token | <|endoftext|> |
| Vocabulary Size | 32064 |
| Torch Data Type | float16 |
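As a quick orientation, here is a minimal sketch of loading this checkpoint with Hugging Face `transformers`, using only values from the card above (the repository ID, float16 weights, and the ~2.3 GB VRAM footprint); the prompt string is illustrative only, not from the card.

```python
# Minimal loading sketch based on the card above: LlamaForCausalLM
# architecture, float16 weights, 4096-token context, transformers >= 4.40.0.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cstr/phi-3-orpo-v9_4"  # Repository row from the card

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # card lists float16; ~2.3 GB VRAM required
    device_map="auto",          # requires the `accelerate` package
)

# Illustrative prompt; adjust for your use case.
prompt = "Explain ORPO fine-tuning in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```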
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| Kanana 1.5 2.1B Instruct 2505 | 32K / 4.6 GB | 4114 | 18 |
| Kanana 1.5 2.1B Base | 32K / 4.2 GB | 2431 | 7 |
| Kanana Safeguard Prompt 2.1B | 8K / 4.2 GB | 233 | 10 |
| Kanana Nano 2.1B Instruct | 8K / 4.2 GB | 3948 | 65 |
| Kanana Nano 2.1B Base | 8K / 4.2 GB | 3010 | 36 |
| DAMI Base Merge 0408 | 8K / 4.6 GB | 23 | 0 |
| EcomGen 0.0.2v | 8K / 8.4 GB | 9 | 0 |
| CT LLM Base | 4K / 4.3 GB | 27 | 10 |
| CT LLM SFT DPO | 4K / 0 GB | 16 | 5 |
| CT LLM SFT | 4K / 0 GB | 11 | 1 |