Phi 3 Mini Code Finetune 128K Instruct V1 by RDson


Tags: Autotrain compatible, Code, Conversational, Custom code, Dataset: m-a-p/CodeFeedback-Filtered-Instruction, Endpoints compatible, Finetuned, Instruct, Phi, Phi-3, Region: us, Safetensors, Sharded, Tensorflow

Phi 3 Mini Code Finetune 128K Instruct V1 Benchmarks

Scores (nn.n%) indicate how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Phi 3 Mini Code Finetune 128K Instruct V1 (RDson/Phi-3-mini-code-finetune-128k-instruct-v1)

Phi 3 Mini Code Finetune 128K Instruct V1 Parameters and Internals

Additional Notes
Due to limited resources and time, training was only partial.
Training Details
Data Sources:
m-a-p/CodeFeedback-Filtered-Instruction
Methodology:
Fine-tuned on CodeFeedback-Filtered-Instruction for roughly 9-10 hours on a single RTX 3090 (24 GB). The run covered only about half an epoch (0.5136). A hedged sketch of this kind of run is given below.
Hardware Used:
A single RTX 3090 (24 GB)
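The exact training script is not published, so the following is only a minimal sketch of a comparable partial fine-tune with transformers, peft, and datasets. A LoRA run is assumed here, since full-precision fine-tuning of a 3.8B model would not fit in 24 GB; the hyperparameters, LoRA target-module names, and the dataset field names ("query", "answer") are assumptions, not RDson's settings.

```python
# Hypothetical sketch of a partial LoRA fine-tune on a single 24 GB GPU.
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)

base = "microsoft/Phi-3-mini-128k-instruct"
tokenizer = AutoTokenizer.from_pretrained(base, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    base, torch_dtype=torch.bfloat16, trust_remote_code=True)

# LoRA keeps the trainable-parameter count small enough for a 24 GB card.
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["qkv_proj", "o_proj"],  # assumed Phi-3 projection names
    task_type="CAUSAL_LM"))

ds = load_dataset("m-a-p/CodeFeedback-Filtered-Instruction", split="train")

def to_text(example):
    # Render each instruction/answer pair with the model's chat template.
    # The field names "query" and "answer" are assumptions about the dataset.
    messages = [{"role": "user", "content": example["query"]},
                {"role": "assistant", "content": example["answer"]}]
    return {"text": tokenizer.apply_chat_template(messages, tokenize=False)}

def tokenize(example):
    return tokenizer(example["text"], truncation=True, max_length=2048)

ds = ds.map(to_text)
ds = ds.map(tokenize, remove_columns=ds.column_names)

Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="phi3-code-ft",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,
        num_train_epochs=0.51,  # mirrors the ~0.5136-epoch partial run
        learning_rate=2e-5,
        bf16=True,
        logging_steps=50),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```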
LLM Name: Phi 3 Mini Code Finetune 128K Instruct V1
Repository 🤗: https://huggingface.co/RDson/Phi-3-mini-code-finetune-128k-instruct-v1
Model Size: 3.8B
Required VRAM: 15.4 GB
Updated: 2025-08-18
Maintainer: RDson
Model Type: phi3
Instruction-Based: Yes
Model Files: 5.0 GB (1 of 4), 5.0 GB (2 of 4), 5.0 GB (3 of 4), 0.4 GB (4 of 4)
Model Architecture: Phi3ForCausalLM
License: other
Context Length: 131072
Model Max Length: 131072
Transformers Version: 4.41.0.dev0
Tokenizer Class: LlamaTokenizer
Padding Token: <|endoftext|>
Vocabulary Size: 32040
Torch Data Type: float32
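The VRAM figure is consistent with the parameter count: 3.8B parameters at float32 (4 bytes each) come to roughly 15.2 GB of weights, so loading in bfloat16 or float16 roughly halves the footprint. Below is a minimal inference sketch, assuming the standard transformers chat-template workflow; the prompt and generation settings are illustrative, and trust_remote_code is assumed to be required given the "Custom code" tag.

```python
# Minimal inference sketch; settings are illustrative, not the author's.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "RDson/Phi-3-mini-code-finetune-128k-instruct-v1"
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.bfloat16,  # halves the ~15.4 GB float32 footprint
    device_map="auto",
    trust_remote_code=True)

messages = [{"role": "user",
             "content": "Write a Python function that reverses a linked list."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
out = model.generate(inputs, max_new_tokens=256,
                     pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```

The 131072-token context and max length carry over from the Phi-3 Mini 128K base; longer prompts use the same call, though KV-cache memory grows with sequence length.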

Best Alternatives to Phi 3 Mini Code Finetune 128K Instruct V1

Best Alternatives                Context / RAM    Downloads    Likes
Phi 4 Mini Instruct              128K / 7.7 GB    199709       579
Phi 3 Mini 128K Instruct         128K / 7.7 GB    1083572      1661
Phi 3.5 Mini Instruct            128K / 7.7 GB    229679       902
MediPhi Instruct                 128K / 7.7 GB    5252         42
NuExtract 1.5                    128K / 7.7 GB    123197       236
NuExtract V1.5                   128K / 7.7 GB    10851        89
Phi 4 Mini Instruct              128K / 7.7 GB    7049         20
MediPhi Clinical                 128K / 7.7 GB    662          9
Phi 3.5 Mini TitanFusion 0.1     128K / 7.7 GB    5            0
MediPhi PubMed                   128K / 7.7 GB    504          6

Rank the Phi 3 Mini Code Finetune 128K Instruct V1 Capabilities

🆘 Have you tried this model? Rate its performance. This feedback helps the ML community identify the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124