Falcon Rw 1B 4bit by HeshamHaroon


Tags: 4-bit, autotrain-compatible, custom code, en, endpoints-compatible, falcon, gptq, pytorch, quantized, region:us, safetensors

Falcon Rw 1B 4bit Benchmarks

nn.n% — how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Falcon Rw 1B 4bit (HeshamHaroon/falcon-rw-1b-4bit)

Falcon Rw 1B 4bit Parameters and Internals

Model Type: text-generation

Use Cases
- Areas: deployment scenarios with model size constraints
- Considerations: quantization quality may vary with the dataset used for calibration; a calibration dataset closely related to the model's domain is recommended.

Additional Notes: The GPTQ integration is especially useful for deployment scenarios where model size is a constraint.
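As a rough sanity check on the size savings, the arithmetic below estimates weight storage at fp16 versus 4-bit. The 1.3 B parameter count is an assumption taken from the upstream falcon-rw-1b model card, not stated on this page:

```python
# Rough weight-storage estimate at different precisions.
# ASSUMPTION: ~1.3e9 parameters (from the upstream falcon-rw-1b card);
# this page only lists the resulting file size (0.8 GB).
params = 1.3e9

def weight_bytes(params, bits_per_weight):
    """Bytes needed to store `params` weights at the given bit width."""
    return params * bits_per_weight / 8

fp16_gb = weight_bytes(params, 16) / 1e9  # ≈ 2.6 GB
int4_gb = weight_bytes(params, 4) / 1e9   # ≈ 0.65 GB; per-group scales and
                                          # zero points add overhead, which is
                                          # consistent with the ~0.8 GB file
print(f"fp16: {fp16_gb:.1f} GB, 4-bit: {int4_gb:.2f} GB")
```

This lines up with the 2.6 GB fp16 alternatives in the table below versus the 0.8 GB quantized files here.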
Supported Languages: en (fluent)

Training Details
- Data Sources: c4, wikitext2
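To see why a calibration set such as c4 or wikitext2 matters, here is a toy group-wise absmax 4-bit round-trip. This is a deliberate simplification, not the GPTQ algorithm (GPTQ additionally minimizes each layer's output error on calibration activations); the function names are illustrative:

```python
# Toy symmetric 4-bit quantization of one weight group (absmax scaling).
# NOT GPTQ: GPTQ solves a least-squares problem against calibration data;
# this only shows the storage format and the rounding error it introduces.

def quantize_4bit(weights):
    """Map floats to signed 4-bit codes in [-7, 7] plus one float scale."""
    scale = max(abs(w) for w in weights) / 7 or 1.0
    codes = [max(-7, min(7, round(w / scale))) for w in weights]
    return codes, scale

def dequantize_4bit(codes, scale):
    """Recover approximate float weights from codes and scale."""
    return [c * scale for c in codes]

group = [0.31, -0.05, 0.12, -0.44, 0.02, 0.27, -0.19, 0.08]
codes, scale = quantize_4bit(group)
approx = dequantize_4bit(codes, scale)
err = max(abs(a - b) for a, b in zip(group, approx))
print(codes, round(err, 4))  # 4 bits per weight instead of 16
```

The worst-case rounding error is half the scale, so how well the quantized model behaves depends on which activations those errors hit; that is what calibration data lets GPTQ optimize for.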
LLM Name: Falcon Rw 1B 4bit
Repository: 🤗 https://huggingface.co/HeshamHaroon/falcon-rw-1b-4bit
Model Size: 1b
Required VRAM: 0.8 GB
Updated: 2025-09-15
Maintainer: HeshamHaroon
Model Type: falcon
Model Files: 0.8 GB
Supported Languages: en
Quantization Type: 4bit
Model Architecture: FalconForCausalLM
License: apache-2.0
Model Max Length: 1024
Transformers Version: 4.32.0
Is Biased: 1
Tokenizer Class: GPT2Tokenizer
Vocabulary Size: 50304
Torch Data Type: float16

Best Alternatives to Falcon Rw 1B 4bit

Best Alternatives                 Context / RAM    Downloads   Likes
Neuralfalcon 1B V1                2K / 2.8 GB      1800        0
Falcon Rw 1B Chat                 2K / 2.6 GB      2955        3
Falcon Rw 1B Instruct Openorca    2K / 2.6 GB      18491       1
Falcon 1b Stage2                  2K / 2.6 GB      4017        3
Falcon 1b Stage1                  2K / 2.6 GB      2569        0
Falcon 1b Stage3                  2K / 2.6 GB      6           0
Falcon 1b Stage3 2                2K / 2.6 GB      7           0
INTERS Falcon 1B                  2K / 0.7 GB      12          1
Crow 1B Attempt1                  2K / 5.3 GB      64          3
HelpingAI Lite Chat               2K / 2.6 GB      4           4
Note: a green score (e.g. "73.2") means the model is better than HeshamHaroon/falcon-rw-1b-4bit.

Rank the Falcon Rw 1B 4bit Capabilities

🆘 Have you tried this model? Rate its performance. Your feedback helps the ML community identify the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  

What open-source LLMs or SLMs are you in search of? 51387 in total.

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124