Tibetan Roberta Causal Base by sangjeedondrub


Tags: Autotrain compatible, bo, Endpoints compatible, PyTorch, Region: US, RoBERTa, Tibetan

Tibetan Roberta Causal Base Benchmarks

Tibetan Roberta Causal Base (sangjeedondrub/tibetan-roberta-causal-base)

Tibetan Roberta Causal Base Parameters and Internals

Model Type: pretrained causal language model
Additional Notes: The model was developed to build familiarity with the Transformers API.
Supported Languages: Tibetan (proficient)
LLM Name: Tibetan Roberta Causal Base
Repository 🤗: https://huggingface.co/sangjeedondrub/tibetan-roberta-causal-base
Required VRAM: 0.3 GB
Updated: 2025-08-22
Maintainer: sangjeedondrub
Model Type: roberta
Model Files: 0.3 GB
Supported Languages: bo
Model Architecture: RobertaForCausalLM
License: mit
Context Length: 514
Model Max Length: 514
Transformers Version: 4.18.0
Tokenizer Class: RobertaTokenizer
Beginning of Sentence Token: <s>
End of Sentence Token: </s>
Unk Token: <unk>
Vocabulary Size: 52000
Torch Data Type: float32
Errors: replace
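Two of the figures above can be sanity-checked with simple arithmetic. A 0.3 GB checkpoint stored as float32 (4 bytes per weight) implies roughly 80M parameters, and RoBERTa's "Context Length 514" follows the usual convention of reserving 2 position-embedding slots, leaving 512 usable token positions. A minimal sketch, assuming the checkpoint holds only fp32 weights and the standard RoBERTa position offset (neither is confirmed by the listing):

```python
# Back-of-envelope checks on the table above.
# Assumptions (not stated in the model card): the 0.3 GB file is
# float32 weights only, and the model uses RoBERTa's standard
# position-embedding offset of 2 (padding_idx + 1).

BYTES_PER_FP32 = 4
checkpoint_bytes = 0.3 * 1024**3              # "Model Files: 0.3 GB"

# Each float32 weight occupies 4 bytes.
approx_params = checkpoint_bytes / BYTES_PER_FP32
print(f"~{approx_params / 1e6:.0f}M parameters")    # ~81M

# RoBERTa reserves 2 position slots, so max_position_embeddings = 514
# leaves 512 real token positions.
max_position_embeddings = 514
usable_context = max_position_embeddings - 2
print(f"usable context: {usable_context} tokens")   # 512
```

This also explains why "Required VRAM" matches the file size: loading fp32 weights needs about as much memory as the checkpoint itself, before activations.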

Best Alternatives to Tibetan Roberta Causal Base

Best Alternatives                    Context / RAM    Downloads  Likes
...ustom Functions Dataset Python    0.5K / 0.5 GB    2          1
...oberta Base TF Weight1 Epoch10    0.5K / 0.5 GB    7          0
...Roberta Base TF Weight1 Epoch5    0.5K / 0.5 GB    5          0
...oberta Base TF Weight1 Epoch15    0.5K / 0.5 GB    5          0
...Roberta Base TF Weight2 Epoch5    0.5K / 0.5 GB    5          0
...berta Base TF Weight0.5 Epoch5    0.5K / 0.5 GB    5          0
...se Finetuned Mbti 0912 Weight0    0.5K / 0.5 GB    13         0
...berta Base Finetuned Mbti 0911    0.5K / 0.5 GB    5          0
...berta Base Finetuned Mbti 0901    0.5K / 0.5 GB    5          0
Math Roberta                         0.5K / 0 GB      3          4
Note: a green score (e.g. "73.2") means the model outperforms sangjeedondrub/tibetan-roberta-causal-base.

Rank the Tibetan Roberta Causal Base Capabilities

🆘 Have you tried this model? Rate its performance. This feedback helps the ML community identify the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  

Which open-source LLMs or SLMs are you looking for? 50835 models indexed in total.

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124