Dalio Pretrained Finetuned 30B Bs4 4e 6lr 2epoch Seed1 by Jellywibble


Dalio Pretrained Finetuned 30B Bs4 4e 6lr 2epoch Seed1 is an open-source language model by Jellywibble. Features: 30B LLM, VRAM: 121.7 GB, Context: 2K, Fine-Tuned, LLM Explorer Score: 0.04.
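The VRAM figure follows from the storage precision: the weights are float32 (4 bytes per parameter), so roughly 30 billion parameters alone come to about 120 GB, in line with the 121.7 GB shard total listed below. A quick sanity check (a sketch; the exact parameter count is not given on this page, so 30e9 is an approximation):

```python
# Back-of-the-envelope weight memory for a ~30B-parameter float32 checkpoint.
# 30e9 is an approximation; the page only gives the "30b" size class.
params = 30e9
bytes_per_param = 4  # float32, per the model card
weight_gb = params * bytes_per_param / 1e9
print(f"~{weight_gb:.0f} GB of weights")  # ~120 GB, close to the 121.7 GB shard total
```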

Autotrain compatible · Endpoints compatible · Finetuned · Opt · Pytorch · Region: us · Sharded

Dalio Pretrained Finetuned 30B Bs4 4e 6lr 2epoch Seed1 Benchmarks

nn.n% indicates how the model compares to the reference models: Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Dalio Pretrained Finetuned 30B Bs4 4e 6lr 2epoch Seed1 (Jellywibble/dalio_pretrained_finetuned_30b-bs4-4e-6lr-2epoch-seed1)

Dalio Pretrained Finetuned 30B Bs4 4e 6lr 2epoch Seed1 Parameters and Internals

LLM Name: Dalio Pretrained Finetuned 30b Bs4 4e 6lr 2epoch Seed1
Repository 🤗: https://huggingface.co/Jellywibble/dalio_pretrained_finetuned_30b-bs4-4e-6lr-2epoch-seed1
Model Size: 30b
Required VRAM: 121.7 GB
Updated: 2025-03-13
Maintainer: Jellywibble
Model Type: opt
Model Files: 13 shards; 9.7 GB (1-of-13), 9.9 GB each (2-of-13 through 12-of-13), 3.1 GB (13-of-13)
Model Architecture: OPTForCausalLM
Context Length: 2048
Model Max Length: 2048
Transformers Version: 4.25.0.dev0
Vocabulary Size: 50265
Torch Data Type: float32
Activation Function: relu
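For orientation, below is a minimal loading sketch using the generic Hugging Face transformers API; it is not taken from the model card, and the float16 cast and the prompt are illustrative assumptions. `from_pretrained` resolves all 13 shards automatically; loading in the native float32 needs the full 121.7 GB, so casting to float16 roughly halves that, and `device_map="auto"` (which requires the accelerate package) spreads layers across the available devices.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Jellywibble/dalio_pretrained_finetuned_30b-bs4-4e-6lr-2epoch-seed1"

tokenizer = AutoTokenizer.from_pretrained(repo)

# from_pretrained pulls and assembles all 13 checkpoint shards.
# float16 halves the ~121.7 GB float32 footprint (an assumption about
# acceptable precision, not a recommendation from the maintainer).
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.float16,
    device_map="auto",  # requires `pip install accelerate`
)

prompt = "What principles guide your decisions?"  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Keep in mind the 2048-token cap (Model Max Length above): the prompt plus generated tokens must fit within it.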

Best Alternatives to Dalio Pretrained Finetuned 30B Bs4 4e 6lr 2epoch Seed1

Best Alternatives | Context / RAM | Downloads | Likes
Galpaca 30B MiniOrca | 2K / 59.6 GB | 174 | 21
Galpaca 30B | 2K / 60.8 GB | 836 | 55
OPT 30B Erebus | 2K / 36 GB | 1712 | 66
Opt Iml Max 30B | 2K / 60.1 GB | 911 | 35
Opt 30B | 2K / 60.1 GB | 8011 | 136
...alactica 30B Evol Instruct 70K | 2K / 60.1 GB | 1618 | 23
Opt Iml 30B | 2K / 60.1 GB | 477 | 4
Galactica 30B | 2K / 60.8 GB | 674 | 40
Dalio Io 30B Cp1500 | 2K / 121.7 GB | 11 | 0
Galactica 30B | 2K / GB | 5 | 1
Note: a green score (e.g. "73.2") means the model performs better than Jellywibble/dalio_pretrained_finetuned_30b-bs4-4e-6lr-2epoch-seed1.

Rank the Dalio Pretrained Finetuned 30B Bs4 4e 6lr 2epoch Seed1 Capabilities

🆘 Have you tried this model? Rate its performance. This feedback helps the ML community identify the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.