Kancil V0 Llama3 by afrizalha


Tags: 8-bit, Autotrain compatible, Dataset: catinthebag/tumpengqa, id, Indonesia, Llama, Llama3, LoRA, Region: us, Safetensors, Sharded, TensorFlow, Unsloth


Kancil V0 Llama3 Parameters and Internals

Model Type 
text generation
Use Cases 
Areas:
research, general AI hobbyists
Applications:
QA functionalities
Primary Use Cases:
Basic QA functionalities
Limitations:
Does not support multi-turn conversation
Considerations:
This is a research preview model with minimal safety curation. Use it only for non-commercial purposes and for fun.
Additional Notes 
There is an issue with the dataset where newline characters were interpreted as literal strings. Keep the .replace() method in your inference code to fix these newline errors.
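The note above implies the model can emit the two-character sequence `\n` instead of a real line break. A minimal post-processing sketch (the exact escape sequence being replaced is an assumption based on that note, not something the card spells out):

```python
def fix_newlines(text: str) -> str:
    """Replace literal backslash-n sequences in model output with real
    newline characters (workaround for the TumpengQA dataset issue
    described above)."""
    return text.replace("\\n", "\n")

raw = "Halo!\\nApa kabar?"   # model output containing a literal \n
print(fix_newlines(raw))     # prints the greeting on two lines
```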
Supported Languages 
Indonesian (fluent)
Training Details 
Data Sources:
catinthebag/TumpengQA
Data Volume:
6.7 million words
Methodology:
Fine-tuned with QLoRA using Unsloth framework
Input Output 
Input Format:
User: {prompt} Asisten: {response}
Accepted Modalities:
text
Output Format:
Generated response
Performance Tips:
Changing the prompt template may degrade performance.
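Since deviating from the template can degrade output quality, prompts should be assembled exactly in the stated format. A small sketch of a helper that does this (the helper name is an assumption; only the `User:`/`Asisten:` labels come from the card):

```python
def build_prompt(question: str) -> str:
    """Assemble a single-turn prompt in the card's stated format:
    'User: {prompt} Asisten: {response}', leaving the response part
    empty for the model to generate."""
    return f"User: {question} Asisten:"

prompt = build_prompt("Apa ibu kota Indonesia?")
print(prompt)  # -> User: Apa ibu kota Indonesia? Asisten:
```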
Release Notes 
Version:
0.0
Notes:
First working prototype, supports basic QA functionalities only.
LLM Name: Kancil V0 Llama3
Repository: https://huggingface.co/afrizalha/Kancil-V0-llama3
Model Size: 4.7b
Required VRAM: 5.8 GB
Updated: 2025-06-09
Maintainer: afrizalha
Model Files: 0.7 GB; 4.7 GB (1-of-2); 1.1 GB (2-of-2)
Supported Languages: id
Model Architecture: AutoModelForCausalLM
License: llama3
Model Max Length: 8192
Is Biased: none
Tokenizer Class: PreTrainedTokenizerFast
Padding Token: <|end_of_text|>
PEFT Type: LORA
LoRA Model: Yes
PEFT Target Modules: down_proj|q_proj|k_proj|o_proj|up_proj|gate_proj|v_proj
LoRA Alpha: 64
LoRA Dropout: 0
R Param: 64
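The adapter hyperparameters above can be collected into a single config object; a sketch using a plain dict whose keys mirror the field names of Hugging Face `peft`'s `LoraConfig` (the mapping is an assumption, since the card shows no loading code):

```python
# LoRA adapter hyperparameters exactly as listed on this card.
lora_config = {
    "r": 64,                 # R Param
    "lora_alpha": 64,        # LoRA Alpha
    "lora_dropout": 0.0,     # LoRA Dropout
    "bias": "none",          # Is Biased: none
    "target_modules": [      # PEFT Target Modules
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
}
print(sorted(lora_config["target_modules"]))
```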

Best Alternatives to Kancil V0 Llama3

Best Alternatives | Context / RAM | Downloads | Likes
ARFLLaMa | 0K / 5.8 GB | 18 | 0
Soro 34K | 10K / 5.8 GB | 16 | 0
...Llm Finetuned V3 High Accuracy | 0K / 5.8 GB | 15 | 1
KoLlamaCredit | 0K / 8.3 GB | 13 | 0
Llama3 Petro Instruct V1 | 0K / 5.8 GB | 14 | 0
Note: a green score (e.g. "73.2") means that the model outperforms afrizalha/Kancil-V0-llama3.

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124