Bawialniagpt by DuckyBlender


Tags: Autotrain compatible · Conversational · Custom code · Dataset: duckyblender/bawialnia... · Endpoints compatible · Instruct · Low quality · Phi3 · Pl · Region: us · Safetensors

Bawialniagpt Benchmarks

nn.n% — how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Bawialniagpt (DuckyBlender/bawialniagpt)

Bawialniagpt Parameters and Internals

Model Type:
text generation, joke
Use Cases:
Limitations:
The model is not suitable for any practical application.
The model may generate nonsensical or offensive responses.
The model may not respond at all, or may respond with complete gibberish.
Additional Notes:
This model is a joke and is intended only for entertainment purposes. It is overtrained and generates random, Polish-ish garbage.
Supported Languages:
Polish (low quality)
Training Details:
Data Sources:
Bawialnia Telegram Group Dataset
Methodology:
QLoRA fine-tuning
Training Time:
~8 hours (3 epochs)
Hardware Used:
RTX 4060
Model Architecture:
Phi-3
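The card states the model was produced by QLoRA fine-tuning on a single RTX 4060 (~8 hours, 3 epochs), but the actual training script is not published. Below is a minimal, hypothetical configuration sketch of that setup using Hugging Face PEFT and bitsandbytes; the rank, alpha, dropout, and target-module names are illustrative assumptions, not the author's values.

```python
# Hypothetical QLoRA configuration sketch -- the card does not publish the
# real training script, so every hyperparameter here is an assumption.
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# 4-bit NF4 quantization of the frozen base weights: the "Q" in QLoRA.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,  # matches the card's torch dtype
)

# Small trainable low-rank adapters attached to the attention projections.
lora_config = LoraConfig(
    r=16,                                   # illustrative rank
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["qkv_proj", "o_proj"],  # Phi-3 attention module names
    task_type="CAUSAL_LM",
)
```

In a typical QLoRA run, `bnb_config` would be passed to `AutoModelForCausalLM.from_pretrained(..., quantization_config=bnb_config)` and `lora_config` to `peft.get_peft_model` before training; only the adapter weights are updated, which is what makes fine-tuning a 3.8B model feasible on an 8 GB consumer GPU.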
LLM Name: Bawialniagpt
Repository 🤗: https://huggingface.co/DuckyBlender/bawialniagpt
Model Size: 3.8b
Required VRAM: 7.6 GB
Updated: 2025-09-23
Maintainer: DuckyBlender
Model Type: phi3
Instruction-Based: Yes
Model Files: 7.6 GB
Supported Languages: pl
Model Architecture: Phi3ForCausalLM
License: gpl-3.0
Context Length: 4096
Model Max Length: 4096
Transformers Version: 4.40.1
Tokenizer Class: LlamaTokenizer
Padding Token: <|endoftext|>
Vocabulary Size: 32064
Torch Data Type: bfloat16
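The listed "Required VRAM" of 7.6 GB follows directly from the figures above: 3.8 billion parameters stored in bfloat16, at 2 bytes per parameter. A quick sanity check:

```python
# Checkpoint-size sanity check: 3.8B parameters x 2 bytes (bfloat16)
params = 3.8e9        # "Model Size: 3.8b"
bytes_per_param = 2   # "Torch Data Type: bfloat16"
size_gb = params * bytes_per_param / 1e9
print(f"{size_gb:.1f} GB")  # prints "7.6 GB", matching "Required VRAM"
```

This is the raw weight footprint only; actual inference needs some extra VRAM for activations and the KV cache.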

Best Alternatives to Bawialniagpt

Best Alternatives                Context / RAM    Downloads/Likes
Phi 4 Mini Instruct              128K / 7.7 GB    268462600
Phi 3 Mini 128K Instruct         128K / 7.7 GB    7300291673
Phi 3.5 Mini Instruct            128K / 7.7 GB    291298909
MediPhi Instruct                 128K / 7.7 GB    466144
NuExtract 1.5                    128K / 7.7 GB    158853239
MediPhi Clinical                 128K / 7.7 GB    70809
NuExtract V1.5                   128K / 7.7 GB    10851189
Phi 3.5 Mini TitanFusion 0.1     128K / 7.7 GB    50
Phi 4 Mini Instruct              128K / 7.7 GB    493020
MediPhi MedCode                  128K / 7.7 GB    8453
Note: a green score (e.g. "73.2") means that the model scores better than DuckyBlender/bawialniagpt.

Rank the Bawialniagpt Capabilities

🆘 Have you tried this model? Rate its performance. Your feedback helps the ML community identify the most suitable models for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  

Which open-source LLMs or SLMs are you looking for? 51545 models in total.

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124