Aetheria L2 70B GPTQ by TheBloke


Tags: 4-bit, Autotrain compatible, Base model (quantized): royallab/..., Base model: royallab/aetheria-l..., En, GPTQ, Llama, Llama 2, Quantized, Region: us, Safetensors

Aetheria L2 70B GPTQ Benchmarks

Benchmark scores (percentages) indicate how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Aetheria L2 70B GPTQ (TheBloke/Aetheria-L2-70B-GPTQ)

Aetheria L2 70B GPTQ Parameters and Internals

Model Type: llama, text-generation
Use Cases:
  Areas: collaborative storytelling, roleplay
  Applications: creative prose training
  Primary Use Cases: roleplaying chat
  Limitations: not intended for supplying factual information or advice
Input Output:
  Input Format:
    ### Instruction:\nCharacter's Persona: {bot character description}\nUser's Persona: {user character description}\nScenario: {what happens in the story}\nPlay the role of Character. You must engage in a roleplaying chat with User below this line. Do not write dialogues and narration for User.\n### Input:\nUser: {utterance}\n### Response:\nCharacter: {utterance}
  Output Format:
    Character: {utterance}
  Performance Tips:
    Message length can be controlled with keywords such as `micro, tiny, short, medium, long, massive, huge, enormous, humongous, unlimited` (see the prompt-building sketch below).
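
A minimal Python sketch of assembling a prompt in the format above. The persona and scenario strings are placeholder assumptions, and the placement of the length keyword on the `### Response:` line follows common roleplay prompt conventions rather than anything stated on this page.

```python
# Minimal sketch: build a roleplay prompt following the template documented above.
# Persona/scenario strings are illustrative placeholders, not values from the model card.

def build_prompt(bot_persona: str, user_persona: str, scenario: str,
                 user_message: str, length: str = "medium") -> str:
    return (
        "### Instruction:\n"
        f"Character's Persona: {bot_persona}\n"
        f"User's Persona: {user_persona}\n"
        f"Scenario: {scenario}\n"
        "Play the role of Character. You must engage in a roleplaying chat with User "
        "below this line. Do not write dialogues and narration for User.\n"
        "### Input:\n"
        f"User: {user_message}\n"
        # Placement of the length keyword is an assumption; adjust to taste.
        f"### Response: (length = {length})\n"
        "Character:"
    )

prompt = build_prompt(
    bot_persona="A stoic knight sworn to protect the realm",    # placeholder
    user_persona="A wandering bard with a quick wit",            # placeholder
    scenario="The pair shelter from a storm in a ruined keep",   # placeholder
    user_message="How did you end up guarding this place?",
)
print(prompt)
```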
LLM Name: Aetheria L2 70B GPTQ
Repository: 🤗 https://huggingface.co/TheBloke/Aetheria-L2-70B-GPTQ
Model Name: Aetheria L2 70B
Model Creator: The Royal Lab
Base Model(s): Aetheria L2 70B (royallab/Aetheria-L2-70B)
Model Size: 70B
Required VRAM: 35.3 GB
Updated: 2025-09-20
Maintainer: TheBloke
Model Type: llama
Model Files: 35.3 GB
Supported Languages: en
GPTQ Quantization: Yes
Quantization Type: gptq
Model Architecture: LlamaForCausalLM
License: other
Context Length: 4096
Model Max Length: 4096
Transformers Version: 4.35.2
Tokenizer Class: LlamaTokenizer
Vocabulary Size: 32000
Torch Data Type: float16
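
For completeness, here is a minimal sketch of loading this GPTQ checkpoint with the Hugging Face transformers stack. It assumes the optimum and auto-gptq packages are installed and that roughly 35.3 GB of GPU memory is available for the weights; the sampling settings are illustrative defaults, not recommendations from the card.

```python
# Minimal sketch: load the 4-bit GPTQ checkpoint and generate a reply.
# Requires transformers (>=4.35), optimum, and auto-gptq, plus enough GPU memory
# for the ~35.3 GB of model files and activations.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/Aetheria-L2-70B-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",  # spread the quantized weights across available GPUs
)

# Replace with a full roleplay prompt as sketched earlier; this is a stub.
prompt = "### Input:\nUser: Hello there.\n### Response:\nCharacter:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs,
    max_new_tokens=300,   # illustrative; stay within the 4096-token context
    temperature=0.8,      # illustrative sampling settings
    do_sample=True,
)
# Decode only the newly generated tokens.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```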

Best Alternatives to Aetheria L2 70B GPTQ

Best Alternatives | Context / RAM | Downloads / Likes
...B Instruct AutoRound GPTQ 4bit | 128K / 39.9 GB | 12656
...B Instruct AutoRound GPTQ 4bit | 128K / 39.9 GB | 10360
...ama 3.1 70B Instruct Gptq 4bit | 128K / 39.9 GB | 134
Opus V1.2 70B Marlin | 32K / 36.4 GB | 50
MoMo 70B Lora 1.8.4 DPO GPTQ | 32K / 41.3 GB | 81
MoMo 70B Lora 1.8.6 DPO GPTQ | 32K / 41.3 GB | 51
Midnight Miqu 70B V1.5 GPTQ32G | 31K / 40.7 GB | 1894
Tess 70B V1.6 Marlin | 31K / 36.3 GB | 71
...Midnight Miqu 70B V1.0 GPTQ32G | 31K / 40.7 GB | 72
Senku 70B GPTQ 4bit | 31K / 36.7 GB | 61

Rank the Aetheria L2 70B GPTQ Capabilities

Have you tried this model? Rate its performance. Your feedback helps the ML community identify the most suitable model for their needs.

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124