Fantastica 7B Instruct 0.2 Italian by scribis


Tags: Adapter, Base model:adapter:mistralai/m..., Base model:mistralai/mistral-7..., Dataset:scribis/corpus-frasi-d..., Dataset:scribis/wikipedia-it-d..., Dataset:scribis/wikipedia-it-m..., Dataset:scribis/wikipedia-it-t..., Dataset:scribis/wikipedia it t..., Finetuned, Finetuning, Instruct, It, Italian, Lora, Mistral, Peft, Region:us, Safetensors

Fantastica 7B Instruct 0.2 Italian Benchmarks

nn.n% — benchmark scores show how the model compares to the reference models: Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Fantastica 7B Instruct 0.2 Italian (scribis/Fantastica-7b-Instruct-0.2-Italian)

Fantastica 7B Instruct 0.2 Italian Parameters and Internals

Model Type 
text generation, large language model, fine-tuned
Use Cases 
Areas:
literature, art description
Applications:
novel plot generation, description of paintings, text generation in literary styles
Primary Use Cases:
Instruction style text generation in Italian, Generation of texts in literary styles of Italian authors
Limitations:
May not handle complex or nuanced queries well; may generate factually incorrect or nonsensical responses
Considerations:
Use outputs with caution and verify carefully
Additional Notes 
First model in a series dedicated to Italian literature.
Supported Languages 
Italian (native)
Training Details 
Data Sources:
Wikipedia_it_Trame_Romanzi, Wikipedia-it-Descrizioni-di-Dipinti, Wikipedia-it-Trame-di-Film, Corpus-Frasi-da-Opere-Letterarie, Wikipedia-it-Mitologia-Greca
Methodology:
Instruction finetuning style using PEFT (Parameter Efficient Fine-Tuning)
Training Time:
70 hours
Hardware Used:
Google Colab A100
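The methodology above (instruction finetuning with PEFT/LoRA) is typically declared through a `LoraConfig`. The sketch below fills in the hyperparameters listed further down in this card (r=64, alpha=16, dropout=0.1, five attention/MLP projections); it is an illustrative config fragment, not the author's actual training script:

```python
from peft import LoraConfig

# Hyperparameters copied from the model card; everything else is a
# typical-default assumption, not confirmed by the author.
lora_config = LoraConfig(
    r=64,                 # R Param
    lora_alpha=16,        # LoRA Alpha
    lora_dropout=0.1,     # LoRA Dropout
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj"],
    task_type="CAUSAL_LM",
)
```

Such a config would then be applied to the base model with `peft.get_peft_model` before training.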
Input Output 
Input Format:
[INST]{instruction}[/INST]
Accepted Modalities:
text
Output Format:
Generated text in Italian
Performance Tips:
Use carefully formatted prompts for best results
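The input format above can be produced with a small helper; `format_prompt` is a hypothetical name for illustration, not part of the model repository:

```python
def format_prompt(instruction: str) -> str:
    """Wrap an instruction in the [INST]...[/INST] tags the card specifies."""
    return f"[INST]{instruction}[/INST]"

# Example: ask for a novel plot in Italian, one of the card's primary use cases.
prompt = format_prompt("Scrivi la trama di un romanzo ambientato a Venezia.")
print(prompt)
# → [INST]Scrivi la trama di un romanzo ambientato a Venezia.[/INST]
```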
LLM Name: Fantastica 7B Instruct 0.2 Italian
Repository 🤗: https://huggingface.co/scribis/Fantastica-7b-Instruct-0.2-Italian
Base Model(s): mistralai/Mistral-7B-Instruct-v0.2
Model Size: 7b
Required VRAM: 0.4 GB
Updated: 2025-08-15
Maintainer: scribis
Instruction-Based: Yes
Model Files: 0.4 GB
Supported Languages: it
Model Architecture: Adapter
License: apache-2.0
Is Biased: none
PEFT Type: LORA
LoRA Model: Yes
PEFT Target Modules: q_proj|gate_proj|o_proj|v_proj|k_proj
LoRA Alpha: 16
LoRA Dropout: 0.1
R Param: 64
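The listed 0.4 GB adapter size is consistent with the LoRA hyperparameters above. A back-of-the-envelope check, assuming the standard Mistral-7B shapes (hidden size 4096, 32 layers, grouped-query attention with 1024-dimensional KV projections, 14336-dimensional MLP gate), which are assumptions drawn from the base model rather than this card:

```python
# LoRA adds two low-rank matrices per target module: A (r x d_in) and
# B (d_out x r), i.e. r * (d_in + d_out) trainable parameters per module.
r = 64  # R Param from the card

# (d_in, d_out) for the five PEFT target modules, using standard Mistral-7B shapes.
modules = {
    "q_proj":    (4096, 4096),
    "k_proj":    (4096, 1024),   # grouped-query attention: smaller KV projections
    "v_proj":    (4096, 1024),
    "o_proj":    (4096, 4096),
    "gate_proj": (4096, 14336),
}
per_layer = sum(r * (d_in + d_out) for d_in, d_out in modules.values())
total = per_layer * 32  # 32 transformer layers

print(f"trainable LoRA parameters: {total:,}")
print(f"approx. adapter size at fp32: {total * 4 / 1e9:.2f} GB")
```

This yields roughly 92 million adapter parameters, or about 0.37 GB at 4 bytes each, in line with the 0.4 GB model files listed above.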

Best Alternatives to Fantastica 7B Instruct 0.2 Italian

Best Alternatives                    Context / RAM   Downloads  Likes
Qwen Megumin                         0K / 0.1 GB     6          1
Deepthink Reasoning Adapter          0K / 0.2 GB     3          3
Mistral 7B Instruct Sa V0.1          0K / 0 GB       5          0
Qwen2.5 7b NotesCorrector            0K / 0.6 GB     10         0
...82 6142 45d8 9455 Bc68ca4866eb    0K / 1.2 GB     5          0
...al 7B Instruct V0.3 1719301256    0K / 0.9 GB     12         0
Text To Rule Mistral 2               0K / 0.3 GB     5          0
Mistral 7B Selfplay V0               0K / 0.2 GB     7          0
...al 7B Instruct V0.3 1719246505    0K / 0 GB       16         0
...Sql Flash Attention 2 Dataeval    0K / 1.9 GB     2          3
Note: a green score (e.g. "73.2") means that the alternative is better than scribis/Fantastica-7b-Instruct-0.2-Italian.



Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124