Superswallow 70B V0.1 by nitky


Merged Model · arXiv:2306.01708 · arXiv:2311.03099 · arXiv:2311.10702 · AutoTrain compatible · Base model: allenai/tulu-2-dpo-70b · Base model: tokyotech-llm/Swallow-70b-instruct-hf · en · Endpoints compatible · Instruct · ja · llama · Region: us · Safetensors · Sharded · TensorFlow

Superswallow 70B V0.1 Benchmarks

Scores (nn.n%) indicate how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Superswallow 70B V0.1 (nitky/Superswallow-70b-v0.1)

Superswallow 70B V0.1 Parameters and Internals

Model Type 
text generation
Use Cases 
Areas:
research, commercial applications
Limitations:
Potential bugs related to 'repetition_penalty' and 'temperature' settings
Considerations:
Adjust generation settings to improve performance.
Additional Notes 
This model integrates Tulu 2 DPO capabilities into the Swallow architecture to improve its ability to follow user intent, as a proof of concept for merging models trained on different languages.
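The arXiv references above point to TIES-merging (arXiv:2306.01708) and DARE (arXiv:2311.03099), both implemented in the mergekit toolkit. As a rough sketch only, a merge along those lines could be described with a recipe like the one below; the dare_ties method and the density and weight values are illustrative assumptions, not the maintainer's published recipe.

# Hypothetical mergekit recipe combining the two listed base models.
# merge_method, density, and weight are assumptions for illustration.
from pathlib import Path

config = """\
merge_method: dare_ties          # DARE (arXiv:2311.03099) applied to TIES (arXiv:2306.01708)
base_model: tokyotech-llm/Swallow-70b-instruct-hf
models:
  - model: tokyotech-llm/Swallow-70b-instruct-hf
  - model: allenai/tulu-2-dpo-70b
    parameters:
      density: 0.5               # fraction of delta weights kept (assumed)
      weight: 0.5                # merge weight (assumed)
dtype: bfloat16
"""

Path("superswallow-merge.yml").write_text(config)
# Then run the mergekit CLI:
#   mergekit-yaml superswallow-merge.yml ./merged-model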
Supported Languages 
en (proficient), ja (proficient)
Input Output 
Input Format:
Custom prompt format using Alpaca-style templates
Performance Tips:
Set 'repetition_penalty' explicitly to curb repetitive output, and avoid very low 'temperature' values, which can trigger errors.
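A minimal generation sketch tying the notes above together, assuming the Japanese Alpaca-style template documented for the Swallow instruct models; the template wording and the sampling values are assumptions to confirm against the model card.

# Minimal generation sketch (the full model needs roughly 138 GB of VRAM,
# so device_map="auto" spreads it across the available GPUs).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nitky/Superswallow-70b-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Alpaca-style prompt in the Swallow instruct convention (assumed wording).
prompt = (
    "以下に、あるタスクを説明する指示があります。"
    "リクエストを適切に完了するための回答を記述してください。\n\n"
    "### 指示:\n東京の観光名所を教えてください。\n\n### 応答:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,          # avoid very low values (see the note above)
    repetition_penalty=1.15,  # set explicitly per the performance tips (value assumed)
)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))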
LLM Name: Superswallow 70B V0.1
Repository: 🤗 https://huggingface.co/nitky/Superswallow-70b-v0.1
Base Model(s): allenai/tulu-2-dpo-70b (Tulu 2 DPO 70B), tokyotech-llm/Swallow-70b-instruct-hf (Swallow 70B Instruct HF)
Merged Model: Yes
Model Size: 70B
Required VRAM: 138.4 GB
Updated: 2025-09-23
Maintainer: nitky
Model Type: llama
Instruction-Based: Yes
Model Files: 15 safetensors shards (9.6 + 10.0 + 9.8 + 9.7 + 9.6 + 9.8 + 10.0 + 9.8 + 9.8 + 10.0 + 9.8 + 9.8 + 9.8 + 9.7 + 1.2 = 138.4 GB total, matching the Required VRAM figure)
Supported Languages: en, ja
Model Architecture: LlamaForCausalLM
License: llama2
Context Length: 4096
Model Max Length: 4096
Transformers Version: 4.36.2
Tokenizer Class: LlamaTokenizer
Vocabulary Size: 43176
Torch Data Type: bfloat16
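The specification values above can be checked without downloading the 138.4 GB of weights, since they live in the repository's small config and tokenizer files. A quick sketch:

# Fetch only the lightweight config/tokenizer files and compare them
# against the listing (context length 4096, vocab 43176, bfloat16).
from transformers import AutoConfig, AutoTokenizer

model_id = "nitky/Superswallow-70b-v0.1"
cfg = AutoConfig.from_pretrained(model_id)
print(cfg.architectures)            # expected: ['LlamaForCausalLM']
print(cfg.max_position_embeddings)  # expected: 4096
print(cfg.vocab_size)               # expected: 43176 (Llama vocabulary extended for Japanese)
print(cfg.torch_dtype)              # expected: torch.bfloat16

tokenizer = AutoTokenizer.from_pretrained(model_id)
print(len(tokenizer))               # normally matches the vocabulary size above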

Best Alternatives to Superswallow 70B V0.1

Best Alternatives | Context / RAM | Downloads | Likes
... Chat 1048K Chinese Llama3 70B | 1024K / 141.9 GB | 906 | 95
... Chat 1048K Chinese Llama3 70B | 1024K / 141.9 GB | 852 | 64
... 3 70B Instruct Gradient 1048K | 1024K / 141.9 GB | 13 | 122
Llama3 Function Calling 1048K | 1024K / 141.9 GB | 6 | 1
...a 3 70B Instruct Gradient 524K | 512K / 141.9 GB | 10 | 23
...a 3 70B Instruct Gradient 262K | 256K / 141.9 GB | 114 | 56
...ama 3 70B Arimas Story RP V2.0 | 256K / 141.1 GB | 26 | 3
...ama 3 70B Arimas Story RP V1.6 | 256K / 141.2 GB | 13 | 0
...ama 3 70B Arimas Story RP V1.5 | 256K / 141.2 GB | 46 | 3
Llama 3.1 70B Instruct | 128K / 141.9 GB | 888766 | 849
Note: a green score (e.g., "73.2") means the model outperforms nitky/Superswallow-70b-v0.1.



Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124