Llama With Eeve The Third 04 150M by kikikara


Llama With Eeve The Third 04 150M is an open-source language model by kikikara. Features: 150M parameters, VRAM: 0.6 GB, Context: 32K, LLM Explorer Score: 0.14.
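The 0.6 GB VRAM figure follows directly from the parameter count and the float32 weight type listed in the details below: 150M parameters at 4 bytes each, weights only (activations and KV cache add more). A quick sanity check:

```python
# Rough VRAM estimate for the model weights alone.
params = 150_000_000      # 150M parameters
bytes_per_param = 4       # float32 = 4 bytes per parameter
weight_bytes = params * bytes_per_param
weight_gb = weight_bytes / 1e9
print(f"{weight_gb:.1f} GB")  # matches the listed 0.6 GB
```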

Tags: Conversational | Datasets: haerae-hub/k2-feedback, haerae-hub/korean-human-judgements, heegyu/aulm-0809, heegyu/cot-collection-ko, heegyu/kowikitext, heegyu/open-korean-instructions, instructkr/ko_elo_arena_0207, markrai/kocommercial-dataset, maywell/ko_wikidata_qa, nlpai-lab/kullm-v2 | Endpoints compatible | Ko | Llama | Region: us | Safetensors

Llama With Eeve The Third 04 150M Benchmarks

nn.n% — how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Llama With Eeve The Third 04 150M (kikikara/llama_with_eeve_the_third_04_150M)

Llama With Eeve The Third 04 150M Parameters and Internals

Model Type 
text-generation
Supported Languages 
ko (proficient)
Training Details 
Data Sources:
maywell/ko_wikidata_QA, nlpai-lab/kullm-v2, heegyu/kowikitext, MarkrAI/KoCommercial-Dataset, heegyu/CoT-collection-ko, HAERAE-HUB/Korean-Human-Judgements, instructkr/ko_elo_arena_0207, HAERAE-HUB/K2-Feedback, heegyu/open-korean-instructions, heegyu/aulm-0809
Methodology:
Trained from random initialization (not fine-tuned from a pre-trained checkpoint). System prompt included during training (translated from Korean): 'You do not make remarks that are immoral, sexual, illegal, or otherwise socially unacceptable. You converse pleasantly with the user and try to help as much as possible by responding to the user's messages as accurately and kindly as you can. {question}'
Model Architecture:
Llama architecture with the EEVE tokenizer
Input Output 
Input Format:
Prompts must include the system format specified: '### System:...'
Accepted Modalities:
text
Output Format:
Generates text based on prompts
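A minimal usage sketch with Hugging Face transformers, assuming the '### System:' prompt format described above. The exact template (role labels, the placeholder system message) is an assumption for illustration; verify it against the repository's model card before relying on it.

```python
REPO = "kikikara/llama_with_eeve_the_third_04_150M"

def build_prompt(question: str) -> str:
    # Assumed layout based on the '### System:' format noted above;
    # the system message here is a placeholder, not the training prompt.
    system = "You are a helpful assistant."
    return f"### System: {system}\n### User: {question}\n### Assistant:"

def generate(question: str, max_new_tokens: int = 128) -> str:
    # Lazy import; requires `pip install transformers torch`.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    # Downloads ~0.6 GB of float32 weights on first call.
    tokenizer = AutoTokenizer.from_pretrained(REPO)
    model = AutoModelForCausalLM.from_pretrained(REPO)
    inputs = tokenizer(build_prompt(question), return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
```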
LLM Name: Llama With Eeve The Third 04 150M
Repository: 🤗 https://huggingface.co/kikikara/llama_with_eeve_the_third_04_150M
Model Size: 150m
Required VRAM: 0.6 GB
Updated: 2026-04-09
Maintainer: kikikara
Model Type: llama
Model Files: 0.6 GB
Supported Languages: ko
Model Architecture: LlamaForCausalLM
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.41.1
Tokenizer Class: LlamaTokenizer
Padding Token: <|im_end|>
Vocabulary Size: 40960
Torch Data Type: float32

Best Alternatives to Llama With Eeve The Third 04 150M

Best Alternatives                        Context / RAM    Downloads  Likes
Llama With Eeve New 03 150m              32K / 0.6 GB     5          1
Llama With Eeve New 150m                 32K / 0.6 GB     5          0
...ma With Eeve The Third 06 150M        32K / 0.6 GB     5          0
... Eeve The Third 04 Law 01 150M        32K / 0.6 GB     5          0
...th Eeve The Third 04 Math 150M        32K / 0.6 GB     5          0
Llm Jp 3 150M Instruct3                  4K / 0.3 GB      187        2
Llm Jp 3 150M Instruct2                  4K / 0.3 GB      84         0
Llama 150M Fresh                         2K / 0.9 GB      2734       1
Note: a green score (e.g. "73.2") means the model scores better than kikikara/llama_with_eeve_the_third_04_150M.

Rank the Llama With Eeve The Third 04 150M Capabilities

🆘 Have you tried this model? Rate its performance. Your feedback helps the ML community identify the most suitable models for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.