L3 Bluuwhale SAO MIX 8B V1 Fp32 Merge Calc by Casual-Autopsy


Tags: Merged Model, Autotrain compatible, Conversational, Endpoints compatible, Llama, Region: US, Safetensors, Sharded, TensorFlow
Base models: bluuwhale/L3-SAO-MIX-8B-V1, Sao10K/L3-8B-Lunaris-v1, Sao10K/L3-8B-Niitama-v1, Sao10K/L3-8B-Stheno-v3.2, Sao10K/L3-8B-Tamamo-v1

L3 Bluuwhale SAO MIX 8B V1 Fp32 Merge Calc Benchmarks

Scores shown as nn.n% indicate how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").

L3 Bluuwhale SAO MIX 8B V1 Fp32 Merge Calc Parameters and Internals

LLM Name: L3 Bluuwhale SAO MIX 8B V1 Fp32 Merge Calc
Repository: https://huggingface.co/Casual-Autopsy/L3-bluuwhale-SAO-MIX-8B-V1_fp32-merge-calc
Base Model(s): bluuwhale/L3-SAO-MIX-8B-V1, Sao10K/L3-8B-Niitama-v1, Sao10K/L3-8B-Lunaris-v1, Sao10K/L3-8B-Tamamo-v1, Sao10K/L3-8B-Stheno-v3.2
Merged Model: Yes
Model Size: 8B
Required VRAM: 16.1 GB
Updated: 2025-06-11
Maintainer: Casual-Autopsy
Model Type: llama
Model Files: 5.0 GB (1-of-4), 5.0 GB (2-of-4), 4.9 GB (3-of-4), 1.2 GB (4-of-4)
Model Architecture: LlamaForCausalLM
Context Length: 8192
Model Max Length: 8192
Transformers Version: 4.42.3
Tokenizer Class: PreTrainedTokenizerFast
Vocabulary Size: 128256
Torch Data Type: bfloat16
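
Given the repository and settings above, the model can be loaded with the Transformers library. This is a minimal sketch, assuming transformers >= 4.42.3 (the version recorded above), the accelerate package for device_map="auto", and roughly 16 GB of free VRAM for the bfloat16 weights; the prompt text is only a placeholder:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    repo_id = "Casual-Autopsy/L3-bluuwhale-SAO-MIX-8B-V1_fp32-merge-calc"

    # PreTrainedTokenizerFast with a 128256-token vocabulary, per the table above
    tokenizer = AutoTokenizer.from_pretrained(repo_id)

    # Weights ship as 4 sharded safetensors files (~16.1 GB total) in bfloat16
    model = AutoModelForCausalLM.from_pretrained(
        repo_id,
        torch_dtype=torch.bfloat16,
        device_map="auto",
    )

    # Context is capped at 8192 tokens
    prompt = "Hello! Introduce yourself."
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))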
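The "fp32 merge calc" suffix in the model name suggests the merge arithmetic was carried out in float32 before the result was saved as bfloat16. The actual merge recipe (method and per-model weights) is not documented here, so the following is a purely hypothetical sketch: a plain weighted average over two of the listed base models, upcasting to float32 for the calculation and downcasting to bfloat16 for storage. Note that holding multiple full models in memory like this is expensive; dedicated tooling would normally stream tensors instead.

    import torch
    from transformers import AutoModelForCausalLM

    # Two of the five listed base models; the real recipe may involve
    # all of them with method-specific weighting (undocumented here).
    base_ids = [
        "bluuwhale/L3-SAO-MIX-8B-V1",
        "Sao10K/L3-8B-Stheno-v3.2",
    ]
    weights = [0.5, 0.5]  # assumed equal weighting, purely illustrative

    state_dicts = [
        AutoModelForCausalLM.from_pretrained(m, torch_dtype=torch.bfloat16).state_dict()
        for m in base_ids
    ]

    merged = {}
    for key in state_dicts[0]:
        # Accumulate in float32 (the "fp32 merge calc"), then store as bfloat16.
        acc = sum(w * sd[key].float() for w, sd in zip(weights, state_dicts))
        merged[key] = acc.to(torch.bfloat16)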

Best Alternatives to L3 Bluuwhale SAO MIX 8B V1 Fp32 Merge Calc

Best Alternatives                    Context / RAM      Downloads   Likes
...otron 8B UltraLong 4M Instruct    4192K / 32.1 GB    3715        108
UltraLong Thinking                   4192K / 16.1 GB    829         2
...a 3.1 8B UltraLong 4M Instruct    4192K / 32.1 GB    176         24
...otron 8B UltraLong 2M Instruct    2096K / 32.1 GB    923         15
...a 3.1 8B UltraLong 2M Instruct    2096K / 32.1 GB    875         9
Zero Llama 3.1 8B Beta6              1048K / 16.1 GB    1416        1
...otron 8B UltraLong 1M Instruct    1048K / 32.1 GB    2070        45
...a 3.1 8B UltraLong 1M Instruct    1048K / 32.1 GB    1387        29
...xis Bookwriter Llama3.1 8B Sft    1048K / 16.1 GB    63          4
....1 1million Ctx Dark Planet 8B    1048K / 32.3 GB    95          2
Note: a green score (e.g. "73.2") means the model scores better than Casual-Autopsy/L3-bluuwhale-SAO-MIX-8B-V1_fp32-merge-calc.

Rank the L3 Bluuwhale SAO MIX 8B V1 Fp32 Merge Calc Capabilities

Have you tried this model? Rate its performance. Your feedback helps the ML community identify the most suitable model for their needs.

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass, and various public Git repositories.
Data release v20241124