Personalized Phi 2 Sft Ultrachat Full Correct Prompt 5 Year by Br3ad


Tags: Arxiv:1910.09700, Autotrain compatible, Conversational, Custom code, Endpoints compatible, Phi, Region: US, Safetensors, Sharded, Tensorflow

Personalized Phi 2 Sft Ultrachat Full Correct Prompt 5 Year Benchmarks

nn.n% — scores show how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").

Personalized Phi 2 Sft Ultrachat Full Correct Prompt 5 Year Parameters and Internals

LLM Name: Personalized Phi 2 Sft Ultrachat Full Correct Prompt 5 Year
Repository (Hugging Face): https://huggingface.co/Br3ad/personalized_phi-2-sft-ultrachat-full_correct-prompt_5_year
Model Size: 2.8b
Required VRAM: 11.1 GB
Updated: 2025-06-09
Maintainer: Br3ad
Model Type: phi
Model Files: 5.0 GB (1-of-3), 5.0 GB (2-of-3), 1.1 GB (3-of-3)
Model Architecture: PhiForCausalLM
Context Length: 2048
Model Max Length: 2048
Transformers Version: 4.41.2
Tokenizer Class: CodeGenTokenizer
Padding Token: <|endoftext|>
Vocabulary Size: 51200
Torch Data Type: float32
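The configuration above maps directly onto a standard Transformers loading call. Below is a minimal sketch, assuming transformers >= 4.41 (matching the listed 4.41.2) plus torch and accelerate are installed; loading in float16 instead of the stored float32 is an optional memory-saving choice, not part of the published config, and the prompt text is purely illustrative.

```python
# Minimal loading sketch for Br3ad/personalized_phi-2-sft-ultrachat-full_correct-prompt_5_year.
# Assumes transformers >= 4.41, torch, and accelerate are installed and that the
# ~2.8B-parameter checkpoint (sharded into 3 safetensors files) can be downloaded.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Br3ad/personalized_phi-2-sft-ultrachat-full_correct-prompt_5_year"

# The listing flags "Custom code", hence trust_remote_code=True.
tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)  # CodeGenTokenizer
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,  # weights are stored as float32 (~11.1 GB); fp16 roughly halves that
    trust_remote_code=True,
    device_map="auto",
)

# Generate within the 2048-token context window.
prompt = "Explain what a language model is."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the padding token is <|endoftext|> (the same token as EOS), passing pad_token_id=tokenizer.eos_token_id simply silences the generation warning without changing behaviour.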

Best Alternatives to Personalized Phi 2 Sft Ultrachat Full Correct Prompt 5 Year

Best Alternatives               Context / RAM     Downloads   Likes
MFANN3bv0.24                    128K / 11.1 GB    15          0
MFANN3b                         128K / 11.1 GB    15          0
MFANN Phigments Slerp V3.2      128K / 5.6 GB     17          0
MFANN3bv1.4                     128K / 11.1 GB    11          0
MFANN3bv1.3                     128K / 11.1 GB    5           0
MFANN3bv0.23                    128K / 11.1 GB    13          0
MFANN3bv1.1                     128K / 11.1 GB    11          0
MFANN3bv1.5                     128K / 11.1 GB    8           0
MFANN3b Rebase                  128K / 11.1 GB    13          0
MFANN Liminerity Slerp 4a       128K / 5.6 GB     11          0

Rank the Personalized Phi 2 Sft Ultrachat Full Correct Prompt 5 Year Capabilities

Have you tried this model? Rate its performance. This feedback helps the ML community identify the most suitable model for their needs. Your contribution really does make a difference!

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124