Zephyr Quiklang 3B by Walmart-the-bag


Base model (finetune): stabilityai/stablelm-zephyr-3b · Causal LM · Conversational · Custom code · Datasets: teknium/openhermes, unalignment/toxic-dpo-v0.1 · Feature extraction · PyTorch · Region: US · Safetensors · StableLM Epoch

Zephyr Quiklang 3B Benchmarks

nn.n% — how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Zephyr Quiklang 3B (Walmart-the-bag/zephyr-quiklang-3b)

Zephyr Quiklang 3B Parameters and Internals

Model Type: causal_lm
Training Details:
  Data Sources: teknium/openhermes, unalignment/toxic-dpo-v0.1
  Data Volume: 10,000 samples
  Context Length: 1024
  Hardware Used: 1x A6000 (48 GB)
LLM Name: Zephyr Quiklang 3B
Repository: https://huggingface.co/Walmart-the-bag/zephyr-quiklang-3b
Base Model(s): Stablelm Zephyr 3B (stabilityai/stablelm-zephyr-3b)
Model Size: 3B
Required VRAM: 5.9 GB
Updated: 2025-09-17
Maintainer: Walmart-the-bag
Model Type: stablelm_epoch
Model Files: 5.9 GB, 5.9 GB, 0.0 GB
Model Architecture: StableLMEpochForCausalLM
License: other
Context Length: 4096
Model Max Length: 4096
Transformers Version: 4.34.1
Tokenizer Class: GPTNeoXTokenizer
Padding Token: <|endoftext|>
Vocabulary Size: 50304
Torch Data Type: float16
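
The card's details (custom StableLMEpochForCausalLM code, float16 weights, <|endoftext|> padding, 4096-token context) suggest how the model would be loaded with Hugging Face transformers. The sketch below is an assumption based on those fields, not an official usage snippet from the repository; in particular, the `<|user|>`/`<|assistant|>` prompt format is assumed to be inherited from the stablelm-zephyr-3b base model.

```python
def build_prompt(user_message: str) -> str:
    """Assumed Zephyr-style chat format inherited from stablelm-zephyr-3b;
    <|endoftext|> is the tokenizer's padding/EOS token per the card above."""
    return f"<|user|>\n{user_message}<|endoftext|>\n<|assistant|>\n"


def load_and_generate(prompt: str, max_new_tokens: int = 128) -> str:
    """Hypothetical loading helper: downloads ~5.9 GB of float16 weights.
    trust_remote_code=True is needed because the architecture
    (StableLMEpochForCausalLM) ships as custom code in the repo."""
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "Walmart-the-bag/zephyr-quiklang-3b"
    tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.float16,  # card lists float16 weights
        trust_remote_code=True,
        device_map="auto",
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(out[0], skip_special_tokens=True)


# Example (requires a GPU with ~6 GB free VRAM and a network connection):
# print(load_and_generate(build_prompt("Summarize attention in one sentence.")))
```

Note that while the model's maximum context is 4096 tokens, the card states it was fine-tuned at a context length of 1024, so quality may degrade on longer inputs.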

Quantized Models of the Zephyr Quiklang 3B

Model | Likes | Downloads | VRAM
Zephyr Quiklang 3B GGUF | 4 | 79 | 1 GB
Zephyr Quiklang 3B GPTQ | 1 | 4 | 1 GB
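
The 5.9 GB VRAM figure for the full model is consistent with a simple weight-memory estimate of parameters × bytes per parameter, and the much smaller quantized builds follow the same arithmetic at lower bit widths. The sketch below illustrates that estimate; the effective parameter count is an assumption back-derived from the card's 5.9 GB float16 figure, not a number stated on the card.

```python
def weight_gb(n_params: float, bits_per_weight: float) -> float:
    """Rough weight-only memory footprint: params * bits / 8, in GB.
    Ignores activation memory, KV cache, and framework overhead."""
    return n_params * bits_per_weight / 8 / 1e9


# Assumed effective parameter count, implied by 5.9 GB at 16 bits/weight;
# the card only says "3b".
N = 2.95e9

print(f"fp16: {weight_gb(N, 16):.1f} GB")   # matches the card's 5.9 GB
print(f"~4-bit: {weight_gb(N, 4.5):.1f} GB")  # typical GGUF/GPTQ 4-bit range
```

This is why 4-bit GGUF/GPTQ builds of a 3B-class model fit comfortably on consumer GPUs, at some cost in output quality.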

Best Alternatives to Zephyr Quiklang 3B

Best Alternatives | Context / RAM | Downloads | Likes
Stable Code 3B Mlx | 16K / 5.6 GB | 22 | 1
Aura 3B | 4K / 5.6 GB | 3 | 2
Slim Extract | 4K / 5.6 GB | 131 | 2
Slim Boolean | 4K / 5.6 GB | 8 | 4
Slim Sa Ner | 4K / 5.6 GB | 20 | 6
Slim Tags 3B | 4K / 5.6 GB | 7 | 4
Slim Summary | 4K / 5.6 GB | 9 | 8
Slim Xsum | 4K / 5.6 GB | 9 | 6
Tofu 3B | 4K / 5.6 GB | 3 | 2
Memphis CoT 3B | 4K / 5.6 GB | 63 | 0
Note: a green score (e.g. "73.2") means the model outperforms Walmart-the-bag/zephyr-quiklang-3b.

Rank the Zephyr Quiklang 3B Capabilities

Have you tried this model? Rate its performance. Your feedback helps the ML community identify the most suitable models for their needs.

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  

What open-source LLMs or SLMs are you in search of? 51415 in total.

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124