| LLM Name | Fine Tuned Codegen 16B Verilog |
|---|---|
| Repository 🤗 | https://huggingface.co/shailja/fine-tuned-codegen-16B-Verilog |
| Model Size | 16b |
| Required VRAM | 32.2 GB |
| Updated | 2025-06-09 |
| Maintainer | shailja |
| Model Type | codegen |
| Generates Code | Yes |
| Model Architecture | CodeGenForCausalLM |
| License | bigcode-openrail-m |
| Transformers Version | 4.22.0.dev0 |
| Tokenizer Class | GPT2Tokenizer |
| Vocabulary Size | 50295 |
| Torch Data Type | float16 |
| Activation Function | gelu_new |
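Given the architecture (CodeGenForCausalLM), tokenizer class, and float16 weights listed above, the checkpoint can be loaded through the standard Transformers causal-LM interface. Below is a minimal sketch, assuming a CUDA GPU with enough memory for the ~32 GB of float16 weights; the Verilog prompt is an illustrative example, not taken from this page.

```python
# Minimal loading/generation sketch for this checkpoint (assumptions noted above).
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

checkpoint = "shailja/fine-tuned-codegen-16B-Verilog"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(
    checkpoint,
    torch_dtype=torch.float16,  # matches the Torch Data Type listed above
).to("cuda")

# Hypothetical prompt: give the model a Verilog module header and let it
# complete the body.
prompt = "//module half adder\nmodule half_adder(a, b, sum, carry);"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```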
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| Instruct Codegen 16B | 0K / 32.2 GB | 31 | 21 |
| Codegen 16B Mono Toolbench | 0K / 128.4 GB | 37 | 5 |
| Codegen2 16B P | 0K / 64.3 GB | 149 | 45 |
| Codegen 16B Multi 6 Parts | 0K / 32.2 GB | 25 | 0 |
| Codegen 16B Nl Sharded | 0K / 32.1 GB | 25 | 7 |
| Codegen 16B Nl | 0K / 32.2 GB | 193 | 18 |
| Codegen 16B Multi | 0K / 32.2 GB | 371 | 120 |
| Codegen 16B Mono | 0K / 32.2 GB | 295 | 126 |