L3 8B Stheno V3.3 32K by Sao10K


Tags: Autotrain compatible · Conversational · En · Endpoints compatible · Llama · Region:us · Safetensors · Sharded · Tensorflow


L3 8B Stheno V3.3 32K Parameters and Internals

Additional Notes: Not a native 32K model; it has issues with long-context understanding and reasoning. Tested in bf16; quantization effects are unknown.
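
Since the card flags quantization effects as untested, one practical check is to load a quantized copy and compare its outputs against the bf16 baseline. Below is a minimal sketch using transformers with bitsandbytes 4-bit quantization; this path is an assumption for experimentation, not something the model card endorses.

```python
# Hedged sketch: load a 4-bit NF4 quantized copy for side-by-side comparison
# with the bf16 baseline. The card does not vouch for quantized quality, so
# treat any degradation you observe as exactly the unknown it warns about.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,  # keep compute in the tested dtype
)

model_4bit = AutoModelForCausalLM.from_pretrained(
    "Sao10K/L3-8B-Stheno-v3.3-32K",
    quantization_config=bnb_config,
    device_map="auto",
)
```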
Supported Languages: en (full)
Training Details
Methodology: Context expanded to 32K via PoSE training; roleplaying samples cleaned, 2x creative-writing samples added, and instruct data remade and refined.
Context Length: 32768
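
PoSE (Positional Skip-wisE training) extends context without training on full-length sequences: short training chunks keep their text, but their position ids are skipped forward so that, across samples, the model sees position indices spanning the whole target window. Here is a minimal sketch of that position-id manipulation; the chunk count and lengths are illustrative assumptions, not Sao10K's actual recipe.

```python
# Minimal PoSE-style sketch: train on a short sequence (e.g. 8K tokens) while
# exposing position ids drawn from the full 32K target window. The two-chunk
# split and the lengths below are assumptions, not Sao10K's training settings.
import random
import torch

TRAIN_LEN = 8192    # tokens actually fed to the model per sample
TARGET_LEN = 32768  # extended window whose positions we want covered

def pose_position_ids(train_len: int = TRAIN_LEN,
                      target_len: int = TARGET_LEN) -> torch.Tensor:
    # Split the sample into two chunks at a random boundary...
    split = random.randint(1, train_len - 1)
    # ...and push the second chunk's positions forward by a random skip,
    # so the maximum position can land anywhere up to target_len - 1.
    skip = random.randint(0, target_len - train_len)
    first = torch.arange(0, split)
    second = torch.arange(split, train_len) + skip
    return torch.cat([first, second])  # strictly increasing, spans up to 32K

ids = pose_position_ids()
print(ids.shape, int(ids.max()))  # torch.Size([8192]), max position < 32768
```

At inference time the model then runs with ordinary sequential position ids over the full 32K window it was exposed to during training.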
LLM Name: L3 8B Stheno V3.3 32K
Repository: 🤗 https://huggingface.co/Sao10K/L3-8B-Stheno-v3.3-32K
Model Size: 8B
Required VRAM: 16.1 GB
Updated: 2025-06-10
Maintainer: Sao10K
Model Type: llama
Model Files: 5.0 GB (1-of-4), 5.0 GB (2-of-4), 4.9 GB (3-of-4), 1.2 GB (4-of-4)
Supported Languages: en
Model Architecture: LlamaForCausalLM
License: cc-by-nc-4.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.41.1
Tokenizer Class: PreTrainedTokenizerFast
Padding Token: <|end_of_text|>
Vocabulary Size: 128256
Torch Data Type: bfloat16
L3 8B Stheno V3.3 32K (Sao10K/L3-8B-Stheno-v3.3-32K)
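
Given the specs above (bfloat16 weights in four safetensors shards, ~16.1 GB, transformers 4.41.1, a 32768-token window), a minimal loading sketch with Hugging Face transformers could look like the following. The prompt is an arbitrary example, and the hardware assumption (enough GPU memory for bf16, with device_map offload as a fallback) is mine, not the card's.

```python
# Minimal sketch: load L3-8B-Stheno-v3.3-32K in bf16, the dtype it was tested in.
# Assumes roughly 16 GB+ of free accelerator memory; actual usage grows with
# context length, and the sharded safetensors files are fetched automatically.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Sao10K/L3-8B-Stheno-v3.3-32K"

tokenizer = AutoTokenizer.from_pretrained(repo)  # PreTrainedTokenizerFast, vocab 128256
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.bfloat16,  # card notes the model was tested in bf16
    device_map="auto",           # spreads shards across GPUs / offloads to CPU
)

prompt = "Write a short scene between two rivals meeting after years apart."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs,
    max_new_tokens=256,
    pad_token_id=tokenizer.eos_token_id,  # padding token is <|end_of_text|>
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```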

Best Alternatives to L3 8B Stheno V3.3 32K

Best Alternatives                  | Context / RAM   | Downloads | Likes
...otron 8B UltraLong 4M Instruct  | 4192K / 32.1 GB | 3395      | 108
UltraLong Thinking                 | 4192K / 16.1 GB | 507       | 2
...a 3.1 8B UltraLong 4M Instruct  | 4192K / 32.1 GB | 176       | 24
...a 3.1 8B UltraLong 2M Instruct  | 2096K / 32.1 GB | 875       | 9
...otron 8B UltraLong 2M Instruct  | 2096K / 32.1 GB | 622       | 15
Zero Llama 3.1 8B Beta6            | 1048K / 16.1 GB | 1094      | 1
...otron 8B UltraLong 1M Instruct  | 1048K / 32.1 GB | 1754      | 45
...a 3.1 8B UltraLong 1M Instruct  | 1048K / 32.1 GB | 1387      | 29
...xis Bookwriter Llama3.1 8B Sft  | 1048K / 16.1 GB | 63        | 4
....1 1million Ctx Dark Planet 8B  | 1048K / 32.3 GB | 93        | 2


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124