Iambe 20B DARE by athirdpath


Autotrain compatible · Endpoints compatible · Llama · Region: us · Safetensors · Sharded · Tensorflow

Iambe 20B DARE Benchmarks

Scores are shown as percentages ("nn.n%") indicating how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Iambe 20B DARE (athirdpath/Iambe-20b-DARE)

Iambe 20B DARE Parameters and Internals

Additional Notes
The model is designed for a detailed understanding of anatomy and scene states while remaining personable and authentic in voice. The user is responsible for the deployment and outputs of this uncensored model.

Training Details
Context Length: 4096
LLM Name: Iambe 20B DARE
Repository: https://huggingface.co/athirdpath/Iambe-20b-DARE
Model Size: 20b
Required VRAM: 39.9 GB
Updated: 2025-08-23
Maintainer: athirdpath
Model Type: llama
Model Files: 9.9 GB (1-of-5), 9.9 GB (2-of-5), 9.9 GB (3-of-5), 9.9 GB (4-of-5), 0.3 GB (5-of-5)
Model Architecture: LlamaForCausalLM
License: cc-by-nc-4.0
Context Length: 4096
Model Max Length: 4096
Transformers Version: 4.35.2
Tokenizer Class: LlamaTokenizer
Padding Token: </s>
Vocabulary Size: 32000
Torch Data Type: bfloat16
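The required-VRAM figure is consistent with simple arithmetic: roughly 20 billion parameters stored as bfloat16 (2 bytes each) come to about 40 GB, which matches the sum of the five sharded safetensors files. A quick illustrative sanity check:

```python
# Rough weight-memory estimate for a 20B-parameter bfloat16 model.
params = 20e9               # ~20 billion parameters
bytes_per_param = 2         # bfloat16 = 16 bits = 2 bytes
estimate_gb = params * bytes_per_param / 1e9
print(f"estimated weight memory: {estimate_gb:.1f} GB")   # -> 40.0 GB

# Sum of the five sharded model files listed above (GB).
shards = [9.9, 9.9, 9.9, 9.9, 0.3]
print(f"sum of shards: {sum(shards):.1f} GB")             # -> 39.9 GB
```

Note this counts weights only; actual inference needs additional memory for activations and the KV cache.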

Quantized Models of the Iambe 20B DARE

Model | Likes | Downloads | VRAM
Iambe 20B DARE GGUF | 4 | 88 | 8 GB
Iambe 20B DARE GPTQ | 1 | 6 | 10 GB
Iambe 20B DARE AWQ | 1 | 6 | 10 GB
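The lower VRAM figures for the quantized variants follow from reduced weight precision: relative to the 2 bytes per parameter of the bfloat16 original, 4-bit quantization (as used by GPTQ and AWQ) stores roughly 0.5 bytes per parameter. A rough back-of-the-envelope sketch, counting weights only and ignoring runtime overhead (the byte counts are approximations, not measured values):

```python
# Approximate weight memory for a ~20B-parameter model at different precisions.
params = 20e9
precisions = {"bfloat16": 2.0, "int8": 1.0, "4-bit": 0.5}
for name, bytes_per_param in precisions.items():
    gb = params * bytes_per_param / 1e9
    print(f"{name}: ~{gb:.0f} GB")   # bfloat16 ~40 GB, int8 ~20 GB, 4-bit ~10 GB
```

The ~10 GB result for 4-bit weights lines up with the GPTQ and AWQ rows in the table above.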

Best Alternatives to Iambe 20B DARE

Best Alternatives | Context / RAM | Downloads | Likes
Internlm2 5 20B Llamafied | 256K / 39.9 GB | 1019 | 5
Internlm2 20B Llama | 32K / 39.6 GB | 1685 | 20
Stellaris Internlm2 20B R512 | 32K / 39.8 GB | 5 | 3
Internlm2 Chat 20B Llama Old | 32K / 39.6 GB | 9 | 3
Internlm2 Base 20B Llama | 32K / 39.6 GB | 5 | 3
Internlm2 Base 20B Llama | 32K / 39.6 GB | 8 | 0
Deita 20B | 32K / 39.8 GB | 5 | 1
Bagel 20B V04 Llama | 32K / 39.6 GB | 19 | 7
Bagel DPO 20B V04 Llama | 32K / 39.6 GB | 14 | 3
Internlm2 Limarp Chat 20B | 32K / 39.6 GB | 6 | 3



Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124