FluffyKaeloky Luminum V0.1 123B EXL2 2.7bpw H6 by BigHuggyD


  Autotrain compatible   Conversational   Endpoints compatible   Exl2   Merge   Mergekit   Mistral   Quantized   Region:us   Safetensors   Sharded   Tensorflow

FluffyKaeloky Luminum V0.1 123B EXL2 2.7bpw H6 Benchmarks

nn.n% — How the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").

FluffyKaeloky Luminum V0.1 123B EXL2 2.7bpw H6 Parameters and Internals

Model Type: text generation

Use Cases:
- Areas: research, commercial applications
- Applications: daily text generation tasks
- Primary Use Cases: long-context generation, creative descriptions
- Limitations: tends to generate long responses
- Considerations: use the recommended sampler settings to optimize performance.

Additional Notes: merged using the Mistral prompt template.

Training Details:
- Methodology: della_linear merge method

Input/Output:
- Input Format: [INST] {input} [/INST] {output}
- Accepted Modalities: text
- Performance Tips: Min-p: 0.08, repetition penalty: 1.03, repetition penalty range: 4096, smoothing factor: 0.23, no-repeat n-gram size: 2.
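The prompt template and sampler settings above can be sketched in Python. This is a minimal illustration, not tied to any specific inference library: the `build_prompt` helper and the `SAMPLER_SETTINGS` key names are assumptions chosen to mirror common backend parameter names (e.g. in text-generation-webui-style APIs); check your backend's documentation for its exact spellings.

```python
def build_prompt(user_input: str) -> str:
    """Wrap user input in the Mistral-style [INST] ... [/INST] template
    used by this merge (helper name is illustrative)."""
    return f"[INST] {user_input} [/INST]"

# Recommended sampler settings from the model card; the key names are
# assumed to match your backend and may need renaming.
SAMPLER_SETTINGS = {
    "min_p": 0.08,
    "repetition_penalty": 1.03,
    "repetition_penalty_range": 4096,
    "smoothing_factor": 0.23,
    "no_repeat_ngram_size": 2,
}

prompt = build_prompt("Describe a quiet harbor at dawn.")
```

The model's completion would then be appended after `[/INST]`, matching the `{output}` slot in the template.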
LLM Name: FluffyKaeloky Luminum V0.1 123B EXL2 2.7bpw H6
Repository: 🤗 https://huggingface.co/BigHuggyD/FluffyKaeloky_Luminum-v0.1-123B_exl2_2.7bpw_h6
Model Size: 123B
Required VRAM: 42.3 GB
Updated: 2025-06-10
Maintainer: BigHuggyD
Model Type: mistral
Model Files: 8.6 GB (1-of-5), 8.6 GB (2-of-5), 8.5 GB (3-of-5), 8.6 GB (4-of-5), 8.0 GB (5-of-5)
Quantization Type: exl2
Model Architecture: MistralForCausalLM
Context Length: 131072
Model Max Length: 131072
Transformers Version: 4.44.2
Tokenizer Class: LlamaTokenizer
Vocabulary Size: 32768
Torch Data Type: bfloat16
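The "Required VRAM" figure can be sanity-checked against the quantization bitrate and the shard sizes. A small worked example, assuming "2.7bpw" means roughly 2.7 bits per weight averaged over the 123B parameters (the exact figure varies with EXL2's layer-wise bit allocation and the H6 head precision):

```python
# Approximate weight storage at the EXL2 target bitrate.
params = 123e9            # 123B parameters
bits_per_weight = 2.7     # "2.7bpw" from the model name
approx_gb = params * bits_per_weight / 8 / 1e9   # bytes -> GB, ~41.5 GB

# The five safetensors shards listed above sum to the stated Required VRAM.
shards_gb = [8.6, 8.6, 8.5, 8.6, 8.0]
total_gb = sum(shards_gb)                        # 42.3 GB
```

Note that 42.3 GB covers only the weights; serving the full 131072-token context additionally requires VRAM for the KV cache, so real-world memory use will be higher.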

Best Alternatives to FluffyKaeloky Luminum V0.1 123B EXL2 2.7bpw H6

Best Alternatives                      Context / RAM        Downloads / Likes
Behemoth 123B V1.2                     128K / 226.4 GB      27127
Behemoth V1.2 Magnum V4 123B           128K / 216.7 GB      3904
Monstral 123B V2                       128K / 245.4 GB      17432
Gigaberg Mistral Large 123B            128K / 222 GB        242
Behemoth 123B V2.2                     128K / 226.4 GB      925
Behemoth 123B V2.1                     128K / 226.4 GB      4210
Behemoth 123B V2                       128K / 221.6 GB      549
Cakrawala 123B                         128K / 222 GB        463
Behemoth V2.2 Magnum V4 123B           128K / 221.6 GB      431
Behemoth 123B V1                       128K / 221.6 GB      12841
Note: green Score (e.g. "73.2") means that the model is better than BigHuggyD/FluffyKaeloky_Luminum-v0.1-123B_exl2_2.7bpw_h6.

Rank the FluffyKaeloky Luminum V0.1 123B EXL2 2.7bpw H6 Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  

Looking for open-source LLMs or SLMs? 48,075 are listed in total.

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124