I'm attempting to train a model for a micro-controller that only supports 8-bit precision or lower.
This works perfectly when training using your QuantizationAwareTrainingConfig.
In addition, we would also like to prune the network to reduce the number of parameters in our model.
Luckily, the prepare_compression method accepts multiple configurations, so I attempted to also pass a WeightPruningConfig.
This fails, however, with the following traceback:
Traceback (most recent call last):
File "test.py", line 8, in <module>
compression_manager.callbacks.on_train_end()
File "/lib/python3.11/site-packages/neural_compressor/training.py", line 420, in on_train_end
callbacks.on_train_end()
File "/lib/python3.11/site-packages/neural_compressor/compression/callbacks.py", line 226, in on_train_end
get_sparsity_ratio(self.pruners, self.model)
File "/lib/python3.11/site-packages/neural_compressor/compression/pruner/utils.py", line 145, in get_sparsity_ratio
linear_conv_cnt += module.weight.numel()
^^^^^^^^^^^^^^^^^^^
AttributeError: 'function' object has no attribute 'numel'
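For what it's worth, my guess at the cause (an assumption on my part, not verified against your internals): after QAT conversion, quantized modules expose weight as a callable rather than a plain tensor attribute, so the attribute-style access in get_sparsity_ratio retrieves a function object. A torch-free sketch of that failure mode, using hypothetical stand-in classes:

```python
# Torch-free stand-ins illustrating the attribute-vs-callable mismatch.
# These classes are hypothetical illustrations, not neural_compressor code.
class Tensor:
    def __init__(self, n):
        self.n = n

    def numel(self):
        return self.n


class FloatLinear:
    def __init__(self):
        self.weight = Tensor(10)  # weight is a tensor-like attribute


class QuantizedLinear:
    def __init__(self):
        # After conversion, weight must be fetched by calling it.
        self.weight = lambda: Tensor(10)


print(FloatLinear().weight.numel())  # attribute access works: 10

try:
    # Attribute access now yields the function itself, not a tensor.
    QuantizedLinear().weight.numel()
except AttributeError as e:
    print(e)  # 'function' object has no attribute 'numel'
```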
I was wondering whether this is supposed to be a supported use case and I'm simply doing something wrong, or whether combining multiple compression methods is not yet supported.
The following code can be used to minimally reproduce the error:
from neural_compressor import QuantizationAwareTrainingConfig
from neural_compressor.training import prepare_compression, WeightPruningConfig
from timm.models import create_model

# Combine quantization-aware training with weight pruning on one model.
quant_config = QuantizationAwareTrainingConfig()
prune_config = WeightPruningConfig([{"start_step": 1, "end_step": 10000}])
compression_manager = prepare_compression(create_model("resnet50"), [quant_config, prune_config])

compression_manager.callbacks.on_train_begin()
# (training loop omitted for the minimal reproduction)
compression_manager.callbacks.on_train_end()  # raises the AttributeError above
I will take a look into the situation you describe, combining quantization and pruning. If it is a bug, I will try to fix it. Please note, however, that most of our pruning algorithms require an additional training pass, which may shift the weights and invalidate your earlier quantization. If you have any further questions, please do not hesitate to contact us.
Thank you for looking into this!
I am indeed aware that the pruning process would require additional training, but would it not be possible to make that training quantization-aware?
The low-precision weights shifting during pruning would not be a problem; in fact, that was my intention.
Thanks for the additional information. Combining quantization and pruning in one training process does seem to make sense. I will investigate and find out whether we currently support this configuration.
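If the culprit is indeed that converted modules expose weight as a callable, one possible defensive guard in the sparsity-counting path would be to resolve the weight before counting elements. This is only a sketch under that assumption, using hypothetical torch-free stand-ins, not a committed patch:

```python
def resolve_weight(module):
    """Return the weight tensor whether `weight` is an attribute or a callable."""
    w = module.weight
    return w() if callable(w) else w


# Hypothetical stand-ins for a float module and a converted quantized module.
class Tensor:
    def __init__(self, n):
        self.n = n

    def numel(self):
        return self.n


class FloatLinear:
    def __init__(self):
        self.weight = Tensor(8)


class QuantizedLinear:
    def __init__(self):
        self.weight = lambda: Tensor(8)


# Element counting then works uniformly for both module flavors.
total = sum(resolve_weight(m).numel() for m in (FloatLinear(), QuantizedLinear()))
print(total)  # 16
```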