Adaround meaningless runtime error msg with aimet_onnx #2896

Open
escorciav opened this issue Apr 19, 2024 · 6 comments
@escorciav
Any tips for debugging an `Aborted (core dumped)` crash with the aimet_onnx Adaround API?
AFAIK such errors typically originate at a Python<=>C++/binary interface.

Thanks in advance. Let's engage!

2024-04-19 13:29:15,670 - root - INFO - Adaround starts...
2024-04-19 13:35:02,198 - Quant - INFO - No config file provided, defaulting to config file at /home/FUNNYLOCAL/c.wafflehouse/projects/on-device-sr/gitSR/venv_aimet/lib/python3.8/site-packages/aimet_common/quantsim_config/default_config.json
2024-04-19 13:35:02,311 - Quant - INFO - Selecting DefaultOpInstanceConfigGenerator to compute the specialized config. hw_version:default
2024-04-19 13:35:03,501 - Utils - INFO - Caching 3000 batches from data loader at path location: /tmp/adaround/
2024-04-19 13:35:03,954 - Quant - INFO - Started Optimizing weight rounding of module: Conv_55                                                                                                            
  0%|          | 0/3751 [00:00<?, ?it/s]
/home/FUNNYLOCAL/c.wafflehouse/projects/on-device-sr/gitSR/venv_aimet/lib/python3.8/site-packages/aimet_onnx/adaround/adaround_optimizer.py:131: UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors. This means you can write to the underlying (supposedly non-writeable) NumPy array using the tensor. You may want to copy the array to protect its data or make it writeable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at ../torch/csrc/utils/tensor_numpy.cpp:180.)
  weights = torch.from_numpy(numpy_helper.to_array(module.params['weight'].tensor)).to(torch_device)
python: /home/ubuntu/workspace/AIMET_PreBuilt_Release/onnx-gpu/aimet/ModelOptimizations/DlQuantization/src/TensorQuantizer.cpp:174: virtual void DlQuantization::TensorQuantizer::quantizeDequantize(const float*, std::size_t, float*, double, double, unsigned int, bool, void*): Assertion `isEncodingValid' failed.
Aborted (core dumped)
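Side note on the UserWarning in the log above: it is benign on its own. Arrays produced by `onnx.numpy_helper.to_array` can be backed by read-only protobuf memory, and `torch.from_numpy` warns about that. A minimal numpy-only sketch of the usual remedy (copy before wrapping), with no torch dependency:

```python
import numpy as np

# Simulate a read-only weight array, like one backed by protobuf memory.
w = np.zeros(3, dtype=np.float32)
w.setflags(write=False)

# Copy first, then hand the copy to torch.from_numpy(...):
# the copy is writeable and detached from the original buffer.
safe = np.array(w, copy=True)
safe[0] = 1.0  # mutating the copy leaves the original untouched
```

This silences the warning but is unrelated to the `isEncodingValid` assertion, which is a separate problem.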
@escorciav
Author

Current approach: Go for coffee/tea, keep calm & import ipdb; ipdb.set_trace()
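A stdlib complement to ipdb for this class of failure: `Aborted (core dumped)` kills the process before Python can print a traceback, but the `faulthandler` module registers handlers for fatal signals (SIGABRT, SIGSEGV, ...) and dumps the Python stack, showing which Python call crossed into the failing C++ code. A minimal sketch:

```python
import faulthandler

# Enable once, early in the script; on a fatal signal the Python
# stack of every thread is dumped to stderr before the process dies.
faulthandler.enable()

# ... run the Adaround / quantsim calls here ...
```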

@quic-mangal
Contributor

@escorciav, I have a few questions:

  1. Is Conv_55 your first layer?
  2. Before it enters `AdaroundOptimizer.adaround_module`, could you check whether all weight encodings are present and valid?

@escorciav
Author

escorciav commented Apr 23, 2024

  1. Yes, Conv_55 is the 1st layer.

Regarding 2,

  1. Where should I set the breakpoint? Is it OK to do it inside the method adaround_module?
  2. Can you give me an example of valid and invalid weight encodings?

@quic-mangal
Contributor

A valid encoding would be a libpymo.TfEncoding object with min, max, bw, scale/delta, and offset populated.
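To illustrate what "present and valid" could mean here, below is a hypothetical, plain-Python stand-in for the pybind11 `libpymo.TfEncoding` class. The `Encoding` dataclass and `looks_valid` helper are illustrative assumptions, not AIMET API; the real check inside the C++ code is the `isEncodingValid` flag that fails in the log above.

```python
from dataclasses import dataclass

@dataclass
class Encoding:
    # Mirrors the fields mentioned above: min, max, bw (bitwidth),
    # delta (scale), and offset. Hypothetical stand-in, not libpymo.
    min: float
    max: float
    bw: int
    delta: float
    offset: float

def looks_valid(enc) -> bool:
    """Rough sanity check: all fields present and internally consistent."""
    required = ("min", "max", "bw", "delta", "offset")
    if enc is None or not all(hasattr(enc, f) for f in required):
        return False
    return enc.bw > 0 and enc.max > enc.min and enc.delta != 0

good = Encoding(min=-1.0, max=1.0, bw=8, delta=2.0 / 255, offset=-128)
```

An unset or zero-range encoding (e.g. `None`, or `max == min`) would be the kind of state that trips the `isEncodingValid` assertion.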

@escorciav
Author

Thanks for your support.

I handed the aimet_onnx Adaround work off to a colleague, as I'm interested in QAT. Thus, I gotta use aimet_torch or aimet_tf, right?

I may give debugging this a shot if I get some spare dev cycles. If it goes unattended, feel free to close the issue in 6 months.
Cheers!

@quic-mangal
Contributor

I would suggest going for aimet_torch.
