Mergekit Model Stock Bzcrthr is an open-source language model published by mergekit-community. Key specs: 8B parameters, 16.1 GB VRAM required, 128K context, instruction-based, merged model, LLM Explorer score 0.19.
Tags: Merged Model, arXiv:2403.19522, AutoTrain compatible, Conversational, Endpoints compatible, Instruct, Llama, LoRA, Region: US, Safetensors, Sharded, TensorFlow
Base models: azazelle/llama-3-8b..., dreadpoor/derivativ..., eeeebbb2/3aff0ea7-4..., grimjim/llama-3-ins..., kik41/lora-length-l..., kik41/lora-type-des..., resplendentai/smart..., sao10k/l3-8b-stheno..., surya-narayanan/ana..., surya-narayanan/bio..., surya-narayanan/for..., surya-narayanan/hea..., surya-narayanan/hum..., surya-narayanan/pro..., vincentyandex/lora ...
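The model was produced with mergekit's Model Stock method (arXiv:2403.19522). A hypothetical config sketch for this kind of merge is shown below; the model and LoRA names come from the listing on this page, but the overall layout and every parameter choice here is an assumption, not the maintainer's actual config:

```yaml
# Hypothetical mergekit config sketch (model_stock method).
# Names taken from this listing; structure and dtype are assumptions.
merge_method: model_stock
base_model: Sao10K/L3-8B-Stheno-v3.2
models:
  - model: DreadPoor/Derivative-8B-Model_Stock
  - model: Sao10K/L3-8B-Stheno-v3.2+grimjim/Llama-3-Instruct-abliteration-LoRA-8B
  - model: Sao10K/L3-8B-Stheno-v3.2+surya-narayanan/biology
dtype: bfloat16
```

The `model+lora` syntax applies a LoRA adapter to a base checkpoint before merging; the full merge listed below combines many such adapter-augmented copies of the same base.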
Mergekit Model Stock Bzcrthr Benchmarks
Benchmark scores show how the model compares to reference models: Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Mergekit Model Stock Bzcrthr Parameters and Internals
LLM Name: Mergekit Model Stock Bzcrthr
Repository: 🤗 https://huggingface.co/mergekit-community/mergekit-model_stock-bzcrthr
Base Model(s):
- DreadPoor/Derivative-8B-Model_Stock
- Sao10K/L3-8B-Stheno-v3.2, with LoRAs:
- grimjim/Llama-3-Instruct-abliteration-LoRA-8B
- kik41/lora-length-long-llama-3-8b-v2
- kik41/lora-type-descriptive-llama-3-8b-v2
- surya-narayanan/anatomy
- surya-narayanan/biology
- surya-narayanan/formal_logic
- surya-narayanan/health
- surya-narayanan/human_sexuality
- surya-narayanan/professional_psychology
- vincentyandex/lora_llama3_chunked_novel_bs128
- eeeebbb2/3aff0ea7-4262-4abb-97b1-1879f340d32e
- ResplendentAI/Smarts_Llama3
- Azazelle/Llama-3-8B-Abomination-LORA
Merged Model: Yes
Model Size: 8B
Required VRAM: 16.1 GB
Updated: 2025-09-23
Maintainer: mergekit-community
Model Type: llama
Instruction-Based: Yes
Model Files: 5.0 GB (1-of-4), 5.0 GB (2-of-4), 4.9 GB (3-of-4), 1.2 GB (4-of-4)
Model Architecture: LlamaForCausalLM
Context Length: 131072
Model Max Length: 131072
Transformers Version: 4.46.2
Tokenizer Class: PreTrainedTokenizerFast
Vocabulary Size: 128256
LoRA Model: Yes
Torch Data Type: bfloat16
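The 16.1 GB VRAM requirement is consistent with holding roughly 8 billion bfloat16 parameters (2 bytes each) in memory, and it matches the sum of the sharded safetensors files listed above. A quick back-of-the-envelope check (the 8.03B parameter count is an assumption based on the Llama-3-8B architecture, not a figure from this listing):

```python
# Rough VRAM estimate for an ~8B-parameter model stored in bfloat16.
# params is an assumed figure (Llama-3-8B class models have ~8.03B params).
params = 8.03e9
bytes_per_param = 2  # bfloat16 = 16 bits = 2 bytes
total_gb = params * bytes_per_param / 1e9
print(f"{total_gb:.1f} GB")  # prints "16.1 GB"

# Cross-check against the shard sizes from the listing above:
shards_gb = [5.0, 5.0, 4.9, 1.2]
print(f"{sum(shards_gb):.1f} GB")  # prints "16.1 GB"
```

Note this covers weights only; KV-cache and activations add further memory on top, especially at long context lengths.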
Best Alternatives to Mergekit Model Stock Bzcrthr
Note: a green score (e.g. "73.2") means the model is better than mergekit-community/mergekit-model_stock-bzcrthr.
Rank the Mergekit Model Stock Bzcrthr Capabilities
Have you tried this model? Rate its performance. This feedback helps the ML community identify the most suitable models for their needs. Your contribution really does make a difference!
Instruction Following and Task Automation
Factuality and Completeness of Knowledge
Censorship and Alignment
Data Analysis and Insight Generation
Text Generation
Text Summarization and Feature Extraction
Code Generation
Multi-Language Support and Translation