Llama 3.1 70B FLDx2 Tess3 Abliterated Fusion Norm by Nexesenex


Llama 3.1 70B FLDx2 Tess3 Abliterated Fusion Norm is an open-source language model by Nexesenex. It is a merged 70B-parameter LLM that requires about 141.9 GB of VRAM, supports a 128K-token context window, and has an LLM Explorer Score of 0.2.

Tags: Merged Model · Autotrain compatible · Base model: hitachi-nlp/Llama-3.1-70B-FLDx2 · Base model: migtissera/Tess-3-Llama-3.1-70B · Conversational · Endpoints compatible · Llama · Region: us · Safetensors · Sharded · Tensorflow
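
At roughly 141.9 GB in bfloat16, the weights exceed any single consumer GPU, so multi-GPU sharding or quantization is needed for local use. Below is a minimal loading sketch with Hugging Face transformers; the repository id is taken from the listing, while the 4-bit quantization and automatic device mapping are illustrative assumptions rather than settings published by the maintainer.

```python
# Minimal loading sketch. Assumptions: transformers, accelerate and
# bitsandbytes installed, and enough combined GPU/CPU memory. The 4-bit
# quantization is an illustrative choice to shrink the ~141.9 GB bf16
# footprint, not a configuration published by the maintainer.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

repo_id = "Nexesenex/Llama_3.1_70b_FLDx2-Tess3_abliterated_fusion_norm"

quant_cfg = BitsAndBytesConfig(
    load_in_4bit=True,                      # roughly quarters memory use
    bnb_4bit_compute_dtype=torch.bfloat16,  # matches the repo's torch dtype
)

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    quantization_config=quant_cfg,
    device_map="auto",  # shard layers across available GPUs / CPU
)

prompt = "Summarize the idea of model merging in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

With 4-bit weights the footprint drops to roughly 35-40 GB, which still assumes a multi-GPU machine or substantial CPU offload.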

Llama 3.1 70B FLDx2 Tess3 Abliterated Fusion Norm Benchmarks

Benchmark scores are expressed as percentages showing how the model compares to the reference models Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4"). No benchmark results are listed for Llama 3.1 70B FLDx2 Tess3 Abliterated Fusion Norm (Nexesenex/Llama_3.1_70b_FLDx2-Tess3_abliterated_fusion_norm).

Llama 3.1 70B FLDx2 Tess3 Abliterated Fusion Norm Parameters and Internals

LLM Name: Llama 3.1 70b FLDx2 Tess3 Abliterated Fusion Norm
Repository 🤗: https://huggingface.co/Nexesenex/Llama_3.1_70b_FLDx2-Tess3_abliterated_fusion_norm
Base Model(s): hitachi-nlp/Llama-3.1-70B-FLDx2, migtissera/Tess-3-Llama-3.1-70B
Merged Model: Yes
Model Size: 70b
Required VRAM: 141.9 GB
Updated: 2025-09-22
Maintainer: Nexesenex
Model Type: llama
Model Files: 30 safetensors shards totaling ~141.9 GB (shards 1-29 of 4.7-5.0 GB each; shard 30 of 2.0 GB)
Model Architecture: LlamaForCausalLM
Context Length: 131072
Model Max Length: 131072
Transformers Version: 4.51.1
Tokenizer Class: PreTrainedTokenizer
Padding Token: <hono_pad>
Vocabulary Size: 128257
Torch Data Type: bfloat16
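
The internals above can be checked against the repository metadata without pulling the full weight shards; the sketch below reads only the config and tokenizer files. The expected values in the comments mirror the listing and are assumptions about what the repo's config.json and tokenizer files actually contain.

```python
# Sketch: inspect the listed internals (architecture, context length, dtype,
# vocabulary size, pad token) from the config and tokenizer alone, without
# downloading the ~142 GB of weight shards. Repo id is from the listing.
from transformers import AutoConfig, AutoTokenizer

repo_id = "Nexesenex/Llama_3.1_70b_FLDx2-Tess3_abliterated_fusion_norm"

config = AutoConfig.from_pretrained(repo_id)
tokenizer = AutoTokenizer.from_pretrained(repo_id)

print(config.architectures)            # expected: ['LlamaForCausalLM']
print(config.max_position_embeddings)  # expected: 131072 (128K context)
print(config.torch_dtype)              # expected: bfloat16
print(config.vocab_size)               # expected: 128257
print(tokenizer.pad_token)             # expected: '<hono_pad>'
```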

Best Alternatives to Llama 3.1 70B FLDx2 Tess3 Abliterated Fusion Norm

Best Alternatives                    Context / RAM       Downloads / Likes
... Chat 1048K Chinese Llama3 70B    1024K / 141.9 GB    90695
... Chat 1048K Chinese Llama3 70B    1024K / 141.9 GB    79144
... 3 70B Instruct Gradient 1048K    1024K / 141.9 GB    13122
Llama3 Function Calling 1048K        1024K / 141.9 GB    61
...a 3 70B Instruct Gradient 524K    512K / 141.9 GB     1023
...a 3 70B Instruct Gradient 262K    256K / 141.9 GB     11456
...ama 3 70B Arimas Story RP V2.0    256K / 141.1 GB     303
...ama 3 70B Arimas Story RP V1.6    256K / 141.2 GB     130
...ama 3 70B Arimas Story RP V1.5    256K / 141.2 GB     73
Yi 70B 200K RPMerge Franken          195K / 142.4 GB     31

Rank the Llama 3.1 70B FLDx2 Tess3 Abliterated Fusion Norm Capabilities

🆘 Have you tried this model? Rate its performance. Your feedback helps the ML community identify the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.