Release Notes

| Version | Notes |
| --- | --- |
|  | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| gptq-4bit-32g-actorder_True | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| gptq-8bit--1g-actorder_True | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| gptq-8bit-128g-actorder_True | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
| gptq-8bit-32g-actorder_True | 8-bit, with group size 32g and Act Order for maximum inference quality. |
| gptq-4bit-64g-actorder_True | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. |
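As a rough sketch of how one of these variants might be selected, the example below loads a specific branch with the Hugging Face `transformers` library by passing the branch name as `revision`. The repository ID `org/model-GPTQ` is a placeholder for illustration only, not the actual model name.

```python
# Minimal sketch: loading one of the GPTQ branches listed above.
# "org/model-GPTQ" is a placeholder repository ID, not the real model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "org/model-GPTQ"
revision = "gptq-4bit-32g-actorder_True"  # any branch name from the table above

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    revision=revision,   # each quantisation variant lives on its own branch
    device_map="auto",   # place layers on the available GPU(s)
)
```

Lower group sizes (32g) and Act Order trade higher VRAM usage for better accuracy, so the branch choice depends on the available GPU memory.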