Tamillama Tiny 30M by RajuKandasamy


Tags: Arxiv:2305.07759 · Autotrain compatible · Dataset: roneneldan/tinystories... · En · Endpoints compatible · Ggml · Llama · Pytorch · Quantized · Region:us · Safetensors · Ta

Tamillama Tiny 30M Benchmarks

nn.n%: how the model compares to the reference models Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").

Tamillama Tiny 30M Parameters and Internals

Model Type: text generation, translation
Additional Notes: This is a toy model for researchers, students, and LLM enthusiasts.
Supported Languages: Tamil (Fluent), English (Moderate)
Training Details:
  Data Sources: roneneldan/TinyStoriesInstruct
  Context Length: 512
  Model Architecture: LLaMA
Input Output:
  Accepted Modalities: text
  Performance Tips: Set max_new_tokens above 512 when using the translation feature (see the usage sketch below).
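
The listing itself contains no example code; the following is a minimal usage sketch, assuming the standard Hugging Face Transformers API (consistent with the LlamaForCausalLM architecture and LlamaTokenizer listed below). The Tamil prompt is purely illustrative, and max_new_tokens is set above 512 in line with the performance tip.

```python
# Minimal usage sketch (not from the model card): load the checkpoint with
# Transformers and generate text. The prompt is illustrative only.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RajuKandasamy/tamillama_tiny_30m"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "ஒரு நாள், ஒரு சிறிய பூனை"  # "One day, a little cat ..." (illustrative story opener)
inputs = tokenizer(prompt, return_tensors="pt")

# The performance tip above suggests max_new_tokens > 512 for the translation feature.
outputs = model.generate(**inputs, max_new_tokens=600, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```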
LLM Name: Tamillama Tiny 30M
Repository: 🤗 https://huggingface.co/RajuKandasamy/tamillama_tiny_30m
Model Size: 30M
Required VRAM: 0.1 GB
Updated: 2025-08-21
Maintainer: RajuKandasamy
Model Type: llama
Model Files: 0.1 GB, 0.1 GB, 0.1 GB
Supported Languages: ta, en
GGML Quantization: Yes
Quantization Type: ggml
Model Architecture: LlamaForCausalLM
License: gpl
Context Length: 2048
Model Max Length: 2048
Transformers Version: 4.31.0
Tokenizer Class: LlamaTokenizer
Beginning of Sentence Token: <s>
End of Sentence Token: </s>
Unk Token: <unk>
Vocabulary Size: 32000
Torch Data Type: float32
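
Since GGML-format quantized files are listed, the model can presumably also be run CPU-only through a GGML-compatible runtime such as the ctransformers library. The sketch below assumes that route; the model_file name is a hypothetical placeholder and would need to match the actual file in the repository.

```python
# Hedged sketch: run the GGML quantization with ctransformers (CPU inference).
# The model_file value is a hypothetical placeholder, not a verified file name.
from ctransformers import AutoModelForCausalLM

llm = AutoModelForCausalLM.from_pretrained(
    "RajuKandasamy/tamillama_tiny_30m",
    model_file="tamillama_tiny_30m.ggml.bin",  # hypothetical; check the repo's file list
    model_type="llama",
)

print(llm("ஒரு நாள்", max_new_tokens=256))  # "One day" -- illustrative prompt
```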

Best Alternatives to Tamillama Tiny 30M

Best Alternatives | Context / RAM | Downloads / Likes
GPT4 X Alpasta 30B 4bit | 2K / 16.9 GB | 198868
Alpacino30b | 2K / 64.1 GB | 183668
Bluemoonrp 30B | 2K / 17.5 GB | 187324
GPT4 X Alpaca 30B 4bit | 2K / 16.9 GB | 1984162
OpenAssistant Llama 30B 4bit | 2K / 16.9 GB | 2369
Tenebra 30B Alpha01 FP16 | 16K / 65 GB | 18339
Tenebra 30B Alpha01 4BIT | 16K / 19.4 GB | 10841
...nebra 30B Alpha01 EXL2 2 80bpw | 16K / 11.9 GB | 71
...nebra 30B Alpha01 EXL2 2 50bpw | 16K / 10.7 GB | 70
Tenebra 30B Alpha01 EXL2 3bpw | 16K / 12.7 GB | 70

Rank the Tamillama Tiny 30M Capabilities

🆘 Have you tried this model? Rate its performance. This feedback helps the ML community identify the most suitable models for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124