Platypus 30B by lilloukas


Platypus 30B is an open-source language model by lilloukas. Features: 30B parameters, 64.8 GB required VRAM, 2K context, license: other, HF score: 59, LLM Explorer score: 0.11. Benchmarks: ARC 64.6, HellaSwag 84.3, MMLU 64.2, TruthfulQA 45.4, WinoGrande 81.4, GSM8K 14.4.
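The 64.8 GB VRAM figure follows from the float16 weights: each parameter occupies 2 bytes, and the LLaMA "30B" architecture actually holds roughly 32.5 billion parameters (a figure from the original LLaMA release, not stated on this page). A quick back-of-the-envelope sketch:

```python
# Rough VRAM estimate for float16 weights.
# Assumption: ~32.5e9 parameters, the commonly cited LLaMA-30B count.
params = 32.5e9
bytes_per_param = 2                      # float16 = 2 bytes per weight
est_gb = params * bytes_per_param / 1e9  # ~65 GB
# In line with the 64.8 GB required VRAM listed on this page;
# activations and KV cache need extra headroom on top of the weights.
```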

Arxiv: 2302.13971 · En · Endpoints compatible · Llama · Pytorch · Region: us · Safetensors · Sharded · Tensorflow

Platypus 30B Benchmarks

Platypus 30B (garage-bAInd/Platypus-30B)

Platypus 30B Parameters and Internals

Model Type: auto-regressive language model
Additional Notes: The base LLaMA model was trained on varied data that may contain offensive, harmful, and biased content.
Supported Languages: English (proficient)
Training Details:
- Data sources: highly filtered and curated question-and-answer pairs
- Methodology: instruction fine-tuning using LoRA
- Hardware used: 4× A100 80GB
- Model architecture: LLaMA transformer architecture
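The LoRA fine-tuning noted above keeps the base weights W frozen and learns only a low-rank update, so the adapted layer computes h = Wx + (alpha/r)·B·A·x. A minimal sketch of that arithmetic with hypothetical tiny matrices (illustrative dimensions and values only, not the model's actual ranks):

```python
# LoRA forward pass sketch: frozen weight W plus low-rank update B @ A,
# scaled by alpha / r. All dimensions here are made up for illustration.
r, alpha = 2, 4
d_in, d_out = 4, 3
W = [[0.0] * d_in for _ in range(d_out)]  # frozen base weight (zeros here)
A = [[1.0] * d_in for _ in range(r)]      # trainable down-projection (r x d_in)
B = [[1.0] * r for _ in range(d_out)]     # trainable up-projection (d_out x r)
x = [1.0, 2.0, 3.0, 4.0]

def matvec(M, v):
    """Plain matrix-vector product."""
    return [sum(m * vi for m, vi in zip(row, v)) for row in M]

base = matvec(W, x)                  # frozen path
delta = matvec(B, matvec(A, x))      # low-rank path
scale = alpha / r
h = [b + scale * d for b, d in zip(base, delta)]
# h = [40.0, 40.0, 40.0]
```

Only A and B are updated during fine-tuning, which is why a 30B model fits on 4× A100 80GB for this kind of training.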
LLM Name: Platypus 30B
Repository 🤗: https://huggingface.co/garage-bAInd/Platypus-30B
Model Size: 30b
Required VRAM: 64.8 GB
Updated: 2026-04-02
Maintainer: lilloukas
Model Type: llama
Model Files: 2.9 GB: 1-of-23 · 2.9 GB: 2-of-23 · 2.9 GB: 3-of-23 · 3.0 GB: 4-of-23 · 3.0 GB: 5-of-23 · 2.9 GB: 6-of-23 · 2.9 GB: 7-of-23 · 3.0 GB: 8-of-23 · 3.0 GB: 9-of-23 · 2.9 GB: 10-of-23 · 2.9 GB: 11-of-23 · 3.0 GB: 12-of-23 · 3.0 GB: 13-of-23 · 2.9 GB: 14-of-23 · 2.9 GB: 15-of-23 · 3.0 GB: 16-of-23 · 3.0 GB: 17-of-23 · 2.9 GB: 18-of-23 · 2.9 GB: 19-of-23 · 3.0 GB: 20-of-23 · 3.0 GB: 21-of-23 · 2.9 GB: 22-of-23 · 0.4 GB: 23-of-23
Supported Languages: en
Model Architecture: LlamaForCausalLM
License: other
Context Length: 2048
Model Max Length: 2048
Transformers Version: 4.30.0.dev0
Vocabulary Size: 32000
Torch Data Type: float16
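As a sanity check, the 23 shard sizes listed above should add up to roughly the required VRAM. Summing the page's own figures (sizes are rounded to 0.1 GB as listed, so the total slightly overshoots the 64.8 GB figure):

```python
# Shard sizes in GB, exactly as listed on this page (one decimal each).
shard_gb = [2.9, 2.9, 2.9, 3.0, 3.0, 2.9, 2.9, 3.0, 3.0, 2.9, 2.9, 3.0,
            3.0, 2.9, 2.9, 3.0, 3.0, 2.9, 2.9, 3.0, 3.0, 2.9, 0.4]
total_gb = round(sum(shard_gb), 1)
# total_gb = 65.2, consistent with the 64.8 GB required VRAM
# once per-shard rounding is accounted for.
```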

Best Alternatives to Platypus 30B

| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| TildeOpen 30B | 64K / 60.9 GB | 3353 | 158 |
| Flash Llama 30M 20001 | 32K / 0.1 GB | 333 | 0 |
| Smaug Slerp 30B V0.1 | 32K / 60.4 GB | 5 | 0 |
| Tenebra 30B Alpha01 | 16K / 65 GB | 41 | 13 |
| Llama33b 16K | 16K / 65.2 GB | 3 | 1 |
| Yayi2 30B Llama | 4K / 121.2 GB | 922 | 22 |
| ... Tokens By Perplexity Bottom K | 4K / 5.4 GB | 5 | 0 |
| ...lue Sample With Temperature2.0 | 4K / 5.4 GB | 8 | 0 |
| ...via Sample With Temperature2.0 | 4K / 5.4 GB | 5 | 0 |
| ... Tokens By Writing Style Top K | 4K / 5.4 GB | 5 | 0 |
Note: green Score (e.g. "73.2") means that the model is better than garage-bAInd/Platypus-30B.

Rank the Platypus 30B Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable models for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  

What open-source LLMs or SLMs are you looking for? 52,473 are indexed in total.

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20260328a