| LLM Name | Fine Tuned Codegen 16B Verilog |
|---|---|
| Repository 🤗 | https://huggingface.co/shailja/fine-tuned-codegen-16B-Verilog |
| Model Size | 16b |
| Required VRAM | 32.2 GB |
| Updated | 2025-07-30 |
| Maintainer | shailja |
| Model Type | codegen |
| Model Files |  |
| Generates Code | Yes |
| Model Architecture | CodeGenForCausalLM |
| License | bigcode-openrail-m |
| Transformers Version | 4.22.0.dev0 |
| Tokenizer Class | GPT2Tokenizer |
| Vocabulary Size | 50295 |
| Torch Data Type | float16 |
| Activation Function | gelu_new |
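The entries above are enough to load the model through the standard `transformers` API. The sketch below is a minimal example, assuming `transformers`, `torch`, and `accelerate` are installed and roughly 32 GB of VRAM is available (per the Required VRAM row); the Verilog prompt is illustrative only, not taken from the model card.

```python
# Minimal loading sketch using the generic Auto* classes, which
# resolve to CodeGenForCausalLM / GPT2Tokenizer for this repo.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "shailja/fine-tuned-codegen-16B-Verilog"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the Torch Data Type listed above
    device_map="auto",          # requires the accelerate package
)

# Hypothetical Verilog prompt; module name and ports are made up.
prompt = "// half adder\nmodule half_adder(input a, input b, output sum, output carry);"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```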
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| Instruct Codegen 16B | 0K / 32.2 GB | 7 | 21 |
| Codegen 16B Mono Toolbench | 0K / 128.4 GB | 6 | 5 |
| Codegen2 16B P | 0K / 64.3 GB | 12 | 45 |
| Codegen 16B Multi 6 Parts | 0K / 32.2 GB | 6 | 0 |
| Codegen 16B Nl Sharded | 0K / 32.1 GB | 8 | 7 |
| Codegen 16B Nl | 0K / 32.2 GB | 1121 | 18 |
| Codegen 16B Mono | 0K / 32.2 GB | 211 | 126 |
| Codegen 16B Multi | 0K / 32.2 GB | 113 | 119 |