Aurelian V0.5 70B Rope8 32K Fp16 by grimulkan


Tags: Autotrain compatible, Endpoints compatible, Fp16, Llama, Quantized, Region:us, Safetensors, Sharded, Tensorflow

Aurelian V0.5 70B Rope8 32K Fp16 Benchmarks

Aurelian V0.5 70B Rope8 32K Fp16 (grimulkan/aurelian-v0.5-70b-rope8-32K-fp16)

Aurelian V0.5 70B Rope8 32K Fp16 Parameters and Internals

Model Type 
text generation
Use Cases 
Areas:
Research, Commercial applications
Applications:
Story writing, Roleplaying, Document Q&A and Summarization, Interactive World Exploration
Primary Use Cases:
Multi-Round Story Writing, Oneshot Story-writing, Multi-Round Story Planning/Brainstorming, Document Q&A and Summarization, Roleplaying (RP), Interactive World Exploration
Limitations:
No guarantees that the model won't generate NSFW content.
Considerations:
This model is capable of producing offensive and NSFW content. Please use with caution.
Training Details 
Data Sources:
Human-written stories from forums, fanfic websites, The Pile, Summaries of Wikipedia articles, Physical/Spatial Reasoning, Relational Reasoning, Theory of Mind problems, Document Editing Tasks, Sections of Airoboros 2.2.1/3.1, Sections of Surge Instruct, Proxy RP Logs, A fully re-generated version of Floyd Text Adventures, A fully re-generated version of the CYS dataset, NART synthetic therapy logs, Augmental-Stenisgate-Augmented, bluemoon_Karen_cleaned, PIPPA-augmented-dedup, LimaRP-augmented, Erotic Analysis used in reverse, Reading Comprehension, Unnatural Instructions, passkey-retrieval, Long Instructions, OpenORCA GPT4 outputs, Ultrachat Uncensored, ShareGPT Hyper Filtered, Claude Multiround, Wizard Vicuna Unfiltered, SODA Synthetic Dialogue
Methodology:
Fine-tuned with the Llama-2 chat format (a sketch of the template follows below)
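
```python
# A minimal sketch of the standard Llama-2 chat format referenced above.
# build_llama2_prompt is a hypothetical helper written for illustration; the
# exact template used during fine-tuning may differ, so consult the original
# model card if results look off.

def build_llama2_prompt(system, history, user_message):
    """Assemble a Llama-2 chat prompt.

    system: the system prompt (folded into the first [INST] block).
    history: list of (user, assistant) pairs from earlier turns.
    user_message: the new user turn awaiting a reply.
    """
    sys_block = f"<<SYS>>\n{system}\n<</SYS>>\n\n"
    if not history:
        return f"<s>[INST] {sys_block}{user_message} [/INST]"
    first_user, first_assistant = history[0]
    prompt = f"<s>[INST] {sys_block}{first_user} [/INST] {first_assistant} </s>"
    for user_turn, assistant_turn in history[1:]:
        prompt += f"<s>[INST] {user_turn} [/INST] {assistant_turn} </s>"
    prompt += f"<s>[INST] {user_message} [/INST]"
    return prompt
```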
Input Output 
Accepted Modalities:
text
Performance Tips:
Treat the first prompt as you would a system prompt. Describe the scenario in detail, even where it seems obvious, and use the prompt to bias the length of the output (an illustrative example follows below).
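
```python
# Illustrative first prompt following the tip above. The wording is invented
# for demonstration and is not taken from the original model card: it doubles
# as the system prompt, spells out the setting even where it seems obvious,
# and explicitly biases the expected reply length.
first_prompt = (
    "You are co-writing a long-form fantasy story with me. Setting: a coastal "
    "trading city in late autumn. Tone: slow-burn political intrigue, "
    "third-person past tense. Write roughly 800-1000 words per reply and end "
    "each reply at a natural scene break.\n\n"
    "Scene 1: the harbormaster discovers a forged cargo manifest."
)
```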
Release Notes 
Version:
v0.5
Notes:
Greatly reduces 'chatGPTisms' and increases the diversity of NSFW prose.
LLM Name: Aurelian V0.5 70B Rope8 32K Fp16
Repository: https://huggingface.co/grimulkan/aurelian-v0.5-70b-rope8-32K-fp16
Model Size: 70b
Required VRAM: 138.3 GB
Updated: 2025-09-13
Maintainer: grimulkan
Model Type: llama
Model Files: 35 safetensors shards (shards 1-34: 3.9-4.1 GB each; shard 35: 2.1 GB)
Quantization Type: fp16
Model Architecture: LlamaForCausalLM
License: unknown
Context Length: 4096
Model Max Length: 4096
Transformers Version: 4.34.1
Tokenizer Class: LlamaTokenizer
Padding Token: <unk>
Vocabulary Size: 32000
Torch Data Type: float16
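
```python
# A loading sketch, not taken from the model card. Assumptions: "Rope8 32K" in
# the repo name means linear RoPE scaling with factor 8 over the 4096-token
# base context stored in the config; if the uploaded config already carries a
# rope_scaling entry, the override below is redundant. Roughly 138 GB of fp16
# weights must fit across your GPUs/CPU (device_map="auto" shards and offloads).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "grimulkan/aurelian-v0.5-70b-rope8-32K-fp16"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
    rope_scaling={"type": "linear", "factor": 8.0},  # assumption inferred from the repo name
)

prompt = "<s>[INST] Summarize the document below in three paragraphs.\n\n... [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```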

Best Alternatives to Aurelian V0.5 70B Rope8 32K Fp16

Best Alternatives | Context / RAM | Downloads / Likes
...B Instruct Gradient 1048K 8bit | 1024K / 75 GB | 205
...B Instruct Gradient 1048K 4bit | 1024K / 39.7 GB | 73
...B Instruct Gradient 1048K 2bit | 1024K / 21.9 GB | 132
...0B Instruct Gradient 262K 4bit | 256K / 39.7 GB | 133
...0B Instruct Gradient 262K 8bit | 256K / 75 GB | 72
...0B Instruct Gradient 262K 2bit | 256K / 21.9 GB | 91
... Gradient 262K 2.25bpw H6 EXL2 | 256K / 22.2 GB | 70
...t Gradient 262K 4.0bpw H6 EXL2 | 256K / 37.2 GB | 51
...t Gradient 262K 2.4bpw H6 EXL2 | 256K / 23.5 GB | 50
...t Gradient 262K 3.5bpw H6 EXL2 | 256K / 32.9 GB | 50


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124