Autotrain Umberto Proclama by Proclama


Tags: Autotrain · Base model (finetune): microsoft/... · Base model: microsoft/phi-3-min... · Conversational · Dataset: proclama/umberto · Endpoints compatible · Instruct · LoRA · PEFT · Region: US · Safetensors · TensorBoard

Autotrain Umberto Proclama Benchmarks

The benchmark score shows how the model compares to the reference models: Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").

Autotrain Umberto Proclama Parameters and Internals

Model Type: text-generation
Additional Notes: Model trained using AutoTrain.
Input Output:
- Input Format: prompt content, e.g. "hi"
- Accepted Modalities: text
- Output Format: e.g. "Hello! How can I assist you today?"
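The plain prompt shown above ("hi") is wrapped in a chat layout before it reaches the model. The sketch below builds that layout by hand for illustration; the special tokens follow the published Phi-3 chat template, though in practice `tokenizer.apply_chat_template` from `transformers` should be preferred over manual string building:

```python
def build_phi3_prompt(messages):
    """Render a list of {role, content} messages in the Phi-3 chat layout.

    Each turn becomes "<|role|>\n{content}<|end|>\n", and a trailing
    "<|assistant|>\n" cues the model to generate its reply.
    """
    parts = []
    for m in messages:
        parts.append(f"<|{m['role']}|>\n{m['content']}<|end|>\n")
    parts.append("<|assistant|>\n")
    return "".join(parts)

# The card's example input, rendered as a single-turn prompt
prompt = build_phi3_prompt([{"role": "user", "content": "hi"}])
```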
LLM Name: Autotrain Umberto Proclama
Repository: 🤗 https://huggingface.co/Proclama/autotrain-umberto-proclama
Base Model(s): Phi 3 Mini 4K Instruct (microsoft/Phi-3-mini-4k-instruct)
Required VRAM: 0.9 GB
Updated: 2025-06-09
Maintainer: Proclama
Instruction-Based: Yes
Model Files: 0.9 GB, 0.0 GB
Model Architecture: AutoModel
License: other
Model Max Length: 2048
Is Biased: none
Tokenizer Class: LlamaTokenizer
Padding Token: <pad>
PEFT Type: LORA
LoRA Model: Yes
PEFT Target Modules: o_proj|down_proj|qkv_proj|gate_up_proj
LoRA Alpha: 32
LoRA Dropout: 0.05
R Param: 16
Autotrain Umberto Proclama (Proclama/autotrain-umberto-proclama)
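The adapter hyperparameters listed above (rank r = 16, alpha = 32, dropout 0.05) combine in the standard LoRA update: the frozen weight's output is augmented by (alpha / r) · B(A(x)), where A is a random-initialized r×d_in down-projection and B a zero-initialized d_out×r up-projection. A minimal pure-Python sketch, with toy dimensions as an assumption (dropout is a training-time detail and is omitted here):

```python
import random

def matvec(M, x):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(m_ij * x_j for m_ij, x_j in zip(row, x)) for row in M]

def lora_update(x, A, B, alpha=32, r=16):
    """LoRA contribution: (alpha / r) * B @ (A @ x)."""
    scale = alpha / r            # with the card's values: 32 / 16 = 2.0
    h = matvec(A, x)             # project down to rank r
    y = matvec(B, h)             # project back up to d_out
    return [scale * y_i for y_i in y]

# Toy dimensions (hypothetical); the real adapter targets o_proj, down_proj,
# qkv_proj and gate_up_proj of Phi-3-mini with their actual shapes.
d_in, d_out, r = 8, 8, 16
random.seed(0)
A = [[random.gauss(0, 0.02) for _ in range(d_in)] for _ in range(r)]
B = [[0.0] * r for _ in range(d_out)]   # zero init: the adapter starts as a no-op
x = [1.0] * d_in
delta = lora_update(x, A, B)
```

Because B starts at zero, the adapter contributes nothing at initialization, so fine-tuning begins exactly at the base model's behavior; only A and B (not the frozen base weights) are trained.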

Best Alternatives to Autotrain Umberto Proclama

Best Alternatives | Context / RAM | Downloads | Likes
Mamba Python | 0K / 2 GB | 13 | 0
...hi 3 Mini 4K Instruct Ct2 Int8 | 0K / 3.8 GB | 26 | 1
...l 8x7B Instruct V0.1 Llamafile | 0K / GB | 2014 | 18
CSUMLM | 0K / GB | 23 | 1
...hin 2.5 Mixtral 8x7b Llamafile | 0K / GB | 227 | 5
Instruct GPT J | 0K / 0 GB | 0 | 26
Vigogne Bloom 7b1 Instruct | 0K / 0.1 GB | 0 | 4
...a Instruction Fine Tune French | 0K / 0 GB | 0 | 4
MiniMaid L2 | 0K / 0 GB | 27 | 2
Lora Model | 0K / 0.1 GB | 1189 | 0
Note: a green score (e.g. "73.2") means that the model is better than Proclama/autotrain-umberto-proclama.

Rank the Autotrain Umberto Proclama Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  

Which open-source LLMs or SLMs are you searching for? 48046 in total.

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124