Bloomz 7b1 by newsrx


Bloomz 7b1 is an open-source language model maintained by newsrx. Features: LLM, VRAM: 14.1 GB, License: bigscience-bloom-rail-1.0, LLM Explorer Score: 0.07.

  Arxiv:2211.01786   Ak   Ar   As   Bloom   Bm   Bn   Ca   Code   Dataset:bigscience/xp3   En   Endpoints compatible   Es   Eu   Fon   Fr   Gu   Hi   Id   Ig   Ki   Kn   Lg   Ln   Ml   Model-index   Mr   Ne   Nso   Ny   Or   Pa   Pt   Pytorch   Region:us   Rn   Rw   Sn   St   Sw   Ta   Te   Tn   Ts   Tum   Tw   Ur   Vi   Wo   Xh   Yo   Zh   Zu
Model Card on HF 🤗: https://huggingface.co/newsrx/bloomz-7b1

Bloomz 7b1 Benchmarks

nn.n% shows how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Bloomz 7b1 (newsrx/bloomz-7b1)

Bloomz 7b1 Parameters and Internals

Model Type 
text-generation
Use Cases 
Areas:
research
Applications:
prompting in English, crosslingual generalization tasks
Primary Use Cases:
translating, writing creative content
Limitations:
strictly inferior performance in scenarios outside its multitask finetuning setup
Considerations:
Prompts need a clear stopping point (e.g., a final period); otherwise the model may continue the input instead of answering it.
Additional Notes 
BLOOMZ models exhibit varied performance based on prompt engineering quality.
Supported Languages 
Supported: ak, ar, as, bm, bn, ca, code, en, es, eu, fon, fr, gu, hi, id, ig, ki, kn, lg, ln, ml, mr, ne, nso, ny, or, pa, pt, rn, rw, sn, st, sw, ta, te, tn, ts, tum, tw, ur, vi, wo, xh, yo, zh, zu. Proficiency: capable of following human instructions in dozens of languages zero-shot.
Training Details 
Data Sources:
bigscience/xP3
Data Volume:
4.19 billion finetuning tokens
Methodology:
Multitask finetuning on crosslingual task mixture (xP3).
Hardware Used:
64 A100 80GB GPUs with 8 GPUs per node (8 nodes)
Model Architecture:
Same as bloom-7b1.
Input Output 
Input Format:
text input prompts
Accepted Modalities:
text
Output Format:
text responses
Performance Tips:
Mark the end of the input clearly (for example with a final period or an explicit cue) so the model answers rather than continuing the prompt.
LLM Name: Bloomz 7b1
Repository 🤗: https://huggingface.co/newsrx/bloomz-7b1
Required VRAM: 14.1 GB
Updated: 2025-12-19
Maintainer: newsrx
Model Type: bloom
Model Files: 14.1 GB
Supported Languages: ak ar as bm bn ca code en es eu fr gu hi id ig ki kn lg ln ml mr ne ny or pa pt rn rw sn st sw ta te tn ts tw ur vi wo xh yo zh zu
Model Architecture: BloomForCausalLM
License: bigscience-bloom-rail-1.0
Transformers Version: 4.21.0.dev0
Tokenizer Class: BloomTokenizerFast
Padding Token: <pad>
Vocabulary Size: 250880
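The 14.1 GB VRAM figure above is consistent with storing the weights in half precision. As a rough sketch (the ~7.07B parameter count is an approximation assumed here, not stated on this page):

```python
# Rough VRAM estimate for the weights of a bloom-7b1-sized model.
params = 7.07e9          # approximate parameter count (assumption)
bytes_per_param = 2      # fp16 / bf16 storage
weight_gb = params * bytes_per_param / 1e9
print(f"{weight_gb:.1f} GB")  # → 14.1 GB for the weights alone
```

Actual usage at inference time is higher once activations and the KV cache are included.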

Best Alternatives to Bloomz 7b1

Best Alternatives | Context / RAM | Downloads | Likes
Firefly Bloom 7b1 | 0K / 16.2 GB | 180 | 11
Tiny Random BloomForCausalLM | 0K / 0 GB | 17728 | 0
Firefly Bloom 2b6 V2 | 0K / 5 GB | 826 | 9
Tiny Random BloomForCausalLM | 0K / 0 GB | 53937 | 0
Bloom 1b1 RLHF V2 | 0K / 2.1 GB | 1163 | 0
Bloom 1b7 With Lm Head | 0K / 3.4 GB | 2756 | 0
ClinicalGPT Base Zh | 0K / 15.8 GB | 815 | 0
DElefant | 0K / 12.5 GB | 6 | 4
Okapi Da Bloom | 0K / 16.2 GB | 3 | 2
Pixiu V1.0 | 0K / 32.3 GB | 2 | 2



Original data from HuggingFace, OpenCompass and various public git repos.
Release v20260328a