Opt 125M 4bit 128g by Technotech


Tags: arxiv:2005.14165 · arxiv:2205.01068 · 4bit · autotrain-compatible · en · gptq · opt · quantized · region:us

Opt 125M 4bit 128g Benchmarks

nn.n% — How the model compares to the reference models: Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o") or GPT-4 ("gpt4").

Opt 125M 4bit 128g Parameters and Internals

Model Type 
text-generation
Use Cases 
Areas:
research, commercial applications
Primary Use Cases:
evaluation of downstream tasks, text generation
Limitations:
bias, safety issues, generation diversity, hallucination
Considerations:
The training data includes a large amount of unfiltered content from the internet, which is far from neutral.
Additional Notes 
OPT was quantised to 4-bit using AutoGPTQ with a group size of 128 and act-order disabled; see the sketch at the end of this section.
Supported Languages 
English (predominantly)
Training Details 
Data Sources:
BookCorpus, CC-Stories, The Pile, Pushshift.io Reddit dataset, CCNewsV2
Data Volume:
180B tokens
Methodology:
Self-supervised causal language modeling
Context Length:
2048
Training Time:
~33 days
Hardware Used:
992 80GB A100 GPUs
Model Architecture:
GPT3-like, decoder-only
Input Output 
Input Format:
Sequences of 2048 consecutive tokens
Accepted Modalities:
text
Output Format:
Generated text
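
As a rough illustration of the quantisation settings noted above, here is a minimal AutoGPTQ sketch. It assumes the base facebook/opt-125m checkpoint and uses a single placeholder calibration example; a real run would calibrate on a larger, representative text sample.

    from transformers import AutoTokenizer
    from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

    base_model = "facebook/opt-125m"  # assumed base checkpoint
    tokenizer = AutoTokenizer.from_pretrained(base_model, use_fast=True)

    # Settings from the note above: 4-bit weights, group size 128, act-order disabled.
    quantize_config = BaseQuantizeConfig(
        bits=4,
        group_size=128,
        desc_act=False,  # "no act order"
    )

    # Placeholder calibration data; substitute a representative sample in practice.
    examples = [tokenizer("Quantization reduces weight precision while preserving accuracy.")]

    model = AutoGPTQForCausalLM.from_pretrained(base_model, quantize_config)
    model.quantize(examples)
    model.save_quantized("opt-125m-4bit-128g")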
LLM Name: Opt 125M 4bit 128g
Repository 🤗: https://huggingface.co/Technotech/opt-125m-4bit-128g
Model Size: 125m
Required VRAM: 0.2 GB
Updated: 2025-06-09
Maintainer: Technotech
Model Type: opt
Model Files: 0.2 GB
Supported Languages: en
GPTQ Quantization: Yes
Quantization Type: gptq|4bit
Model Architecture: OPTForCausalLM
License: other
Context Length: 2048
Model Max Length: 2048
Transformers Version: 4.29.2
Vocabulary Size: 50272
Torch Data Type: float16
Activation Function: relu
Opt 125M 4bit 128g (Technotech/opt-125m-4bit-128g)
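
A minimal sketch of loading this quantised checkpoint for inference, assuming the auto-gptq and transformers packages and a CUDA device (the prompt and generation settings are illustrative):

    from transformers import AutoTokenizer
    from auto_gptq import AutoGPTQForCausalLM

    repo = "Technotech/opt-125m-4bit-128g"
    tokenizer = AutoTokenizer.from_pretrained(repo)

    # Loads the 4-bit GPTQ weights (~0.2 GB) onto the first GPU.
    model = AutoGPTQForCausalLM.from_quantized(repo, device="cuda:0")

    inputs = tokenizer("The capital of France is", return_tensors="pt").to("cuda:0")
    output = model.generate(**inputs, max_new_tokens=40)
    print(tokenizer.decode(output[0], skip_special_tokens=True))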

Best Alternatives to Opt 125M 4bit 128g

Best Alternatives    | Context / RAM | Downloads | Likes
Opt 125M Gptq        | 2K / 0.1 GB   | 4065      | 0
Opt 125M Gptq        | 2K / 0.1 GB   | 13        | 0
Quantized Opt 125M   | 2K / 0.2 GB   | 16        | 0
Opt 125M Gptq 4bit   | 2K / 0.1 GB   | 13        | 0
Opt 125M Gptq 4bit   | 2K / 0.1 GB   | 33275     | 0
Slc Opt 125M Gptq    | 2K / 0.1 GB   | 16        | 0
Opt 125M GPTQ 2      | 2K / 0.1 GB   | 12        | 1
Opt 125M Gptq 4bits  | 2K / 0.1 GB   | 23        | 1
Opt 125M Gptq        | 2K / 0.1 GB   | 17        | 1
Opt 125M Gptq 4bit   | 2K / 0.1 GB   | 12        | 0
Note: a green score (e.g. "73.2") means that the model is better than Technotech/opt-125m-4bit-128g.

Rank the Opt 125M 4bit 128g Capabilities

🆘 Have you tried this model? Rate its performance. Your feedback helps the ML community identify the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124