Merge Passthrough Meta Llama 3 Instruct 10B by wassemgtk


Tags: Merged Model · Autotrain compatible · Base model: meta-llama/Meta-Llama-3-8B-Instruct · Conversational · Endpoints compatible · Instruct · Llama · Region: US · Safetensors · Sharded · Tensorflow

Merge Passthrough Meta Llama 3 Instruct 10B Benchmarks

Scores (nn.n%) show how the model compares to the reference models: Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").

Merge Passthrough Meta Llama 3 Instruct 10B Parameters and Internals

Additional Notes 
The configuration used for merging included slices for specific layer ranges and used the passthrough merge method with bfloat16 dtype.
Release Notes
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
Merge Method: This model was merged using the passthrough merge method.
Models Merged: meta-llama/Meta-Llama-3-8B-Instruct
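The card does not publish the slice configuration, but a passthrough merge that grows a single 8B donor into a ~10B model typically stacks two overlapping layer ranges. Below is a minimal sketch of such a run, assuming the standard `mergekit-yaml` CLI; the layer ranges are illustrative guesses, not the author's actual config.

```python
# Illustrative reconstruction only: the layer ranges below are assumptions,
# not this model's published configuration. Assumes `pip install mergekit`,
# which installs the `mergekit-yaml` command-line entry point.
import pathlib
import subprocess
import textwrap

config = textwrap.dedent("""\
    slices:
      - sources:
          - model: meta-llama/Meta-Llama-3-8B-Instruct
            layer_range: [0, 20]    # hypothetical lower slice
      - sources:
          - model: meta-llama/Meta-Llama-3-8B-Instruct
            layer_range: [12, 32]   # hypothetical upper slice; the overlap duplicates layers
    merge_method: passthrough
    dtype: bfloat16
    """)

pathlib.Path("passthrough.yml").write_text(config)

# Writes the merged weights (sharded safetensors) to ./merged-10b.
subprocess.run(["mergekit-yaml", "passthrough.yml", "./merged-10b"], check=True)
```

Overlapping the two slices duplicates a band of middle layers, which is how an 8B donor grows to roughly 10B parameters; at bfloat16 that is about 19.6 GB of weights, matching the file sizes listed below.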
LLM Name: Merge Passthrough Meta Llama 3 Instruct 10B
Repository: https://huggingface.co/wassemgtk/merge-passthrough-Meta-Llama-3-Instruct-10B
Base Model(s): Meta Llama 3 8B Instruct (meta-llama/Meta-Llama-3-8B-Instruct)
Merged Model: Yes
Model Size: 8b
Required VRAM: 19.6 GB
Updated: 2025-10-02
Maintainer: wassemgtk
Model Type: llama
Instruction-Based: Yes
Model Files: 10.0 GB (1-of-2), 9.6 GB (2-of-2)
Model Architecture: LlamaForCausalLM
License: llama3
Context Length: 8192
Model Max Length: 8192
Transformers Version: 4.39.3
Tokenizer Class: PreTrainedTokenizerFast
Vocabulary Size: 128256
Torch Data Type: bfloat16
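
A minimal usage sketch based on the specs above (bfloat16 weights, 8192-token context); it assumes the merge keeps the Llama 3 Instruct chat template from the base model, and the prompt and generation settings are placeholders.

```python
# Minimal inference sketch using the standard transformers API.
# Assumes the merged repo ships the base model's chat template and tokenizer.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "wassemgtk/merge-passthrough-Meta-Llama-3-Instruct-10B"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # matches the card's Torch Data Type
    device_map="auto",           # ~19.6 GB of weights, so a 24 GB GPU or a multi-GPU split
)

messages = [{"role": "user", "content": "Explain what a passthrough merge is in two sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```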

Best Alternatives to Merge Passthrough Meta Llama 3 Instruct 10B

| Best Alternatives | Context / RAM | Downloads / Likes |
|---|---|---|
| ...otron 8B UltraLong 4M Instruct | 4192K / 32.1 GB | 2522120 |
| ...a 3.1 8B UltraLong 4M Instruct | 4192K / 32.1 GB | 17624 |
| UltraLong Thinking | 4192K / 16.1 GB | 453 |
| ...otron 8B UltraLong 2M Instruct | 2096K / 32.1 GB | 114215 |
| ...a 3.1 8B UltraLong 2M Instruct | 2096K / 32.1 GB | 8759 |
| ...otron 8B UltraLong 1M Instruct | 1048K / 32.1 GB | 714252 |
| ...a 3.1 8B UltraLong 1M Instruct | 1048K / 32.1 GB | 138729 |
| Zero Llama 3.1 8B Beta6 | 1048K / 16.1 GB | 21 |
| ...dger Nu Llama 3.1 8B UltraLong | 1048K / 16.2 GB | 53 |
| ....1 1million Ctx Dark Planet 8B | 1048K / 32.3 GB | 83 |

Rank the Merge Passthrough Meta Llama 3 Instruct 10B Capabilities

Have you tried this model? Rate its performance. This feedback helps the ML community identify the most suitable model for their needs. Your contribution really does make a difference!

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124