export_model.py crashes with keypoints #5255
@Huxwell I see, it really is weird. It hasn't been long since I ran a keypoint onnx with onnxruntime-gpu.
I don't remember much, but I think I had success with aten_fallback.
No luck so far: ONNX_ATEN_FALLBACK doesn't change anything for me, and ONNX_ATEN results in an export crash.
I will read the comments in my '/Apr2024Detectron_venv/lib/python3.8/site-packages/torch/onnx/utils.py', run the export unit tests from detectron2, and analyze the logs, hoping to find some clue there. In the meantime, do you happen to have a converted .onnx file from a vanilla keypoint detector that you could share? It would let me verify whether my problem is with the exporter (loops in the keypoint-related code) or with my onnxruntime usage/version.
Just to clarify: ONNX_FALLTHROUGH successfully generates an onnx file, but onnxruntime crashes when reading such a file.
Managed to successfully bump STABLE_ONNX_OPSET_VERSION from 11 to 16 and then 17 (eliminating the warning about RoIAlign). The silly issue was that I had been changing the version in my cloned copy of detectron2, rather than in the one installed by pip in the venv. I have mocked the keypoint computation and succeeded in running the model in onnxruntime, so it's onnx-compatible; working on that.
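The clone-vs-venv pitfall above is easy to check before editing anything: ask Python which copy of the package it actually imports. A small stdlib-only sketch (it only prints a path; the edit itself still has to be made by hand in that file):

```python
import importlib.util

# Hedged sketch: print the path of the detectron2 copy this interpreter
# imports, so an opset edit lands in the pip-installed package in the
# venv rather than in an unused local clone.
spec = importlib.util.find_spec("detectron2")
if spec is not None:
    print(spec.origin)  # e.g. .../site-packages/detectron2/__init__.py
else:
    print("detectron2 is not importable from this environment")
```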
@Huxwell Glad to hear you could solve it! I tried to run Detectron2 as-is on my Ubuntu 22.04; it runs with cpu but won't run with cuda, haha. Let me know if something else happens.
Ok, now I am able to run it correctly in onnxruntime, with reasonable predictions (even with custom models: a custom number of keypoints instead of 17, an r18/r34 backbone instead of r50, my own weights rather than pretrained ones, etc.).
@RajUpadhyay FYI, I am now able to run keypoint prediction in TensorRT (heatmaps -> keypoints + repositioning happens in numpy in postprocessing); I describe my process in a bit more detail in the TensorRT issue: NVIDIA/TensorRT#3792
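The heatmaps-to-keypoints numpy postprocessing mentioned above can be sketched as follows. This is a simplified version of what detectron2's `heatmaps_to_keypoints` does (argmax each per-keypoint heatmap and map it back into the detection box); the real implementation also upsamples the heatmaps first, which this sketch omits:

```python
import numpy as np

def heatmaps_to_keypoints(heatmaps, box):
    """Map per-keypoint heatmap argmaxes back to image coordinates.

    heatmaps: (K, H, W) array for one detection (e.g. K=17 COCO keypoints)
    box: (x0, y0, x1, y1) of that detection in image coordinates
    Returns a (K, 3) array of (x, y, score) per keypoint.
    """
    K, H, W = heatmaps.shape
    x0, y0, x1, y1 = box
    scale_x = (x1 - x0) / W
    scale_y = (y1 - y0) / H

    flat = heatmaps.reshape(K, -1)
    idx = flat.argmax(axis=1)          # peak index per keypoint
    ys, xs = np.unravel_index(idx, (H, W))

    return np.stack([
        x0 + (xs + 0.5) * scale_x,     # x at the pixel center, in image coords
        y0 + (ys + 0.5) * scale_y,     # y at the pixel center, in image coords
        flat.max(axis=1),              # raw peak score
    ], axis=1)

# Tiny usage example: one 3-keypoint detection on a 4x4 heatmap grid.
hm = np.zeros((3, 4, 4), dtype=np.float32)
hm[0, 1, 2] = 1.0
hm[1, 3, 0] = 0.5
hm[2, 0, 3] = 0.25
print(heatmaps_to_keypoints(hm, (10.0, 20.0, 18.0, 28.0)))
```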
@Huxwell Wow, congrats! So glad you could do it!
Sorry, I asked and apparently my company policy doesn't allow me to share it, but I think the snippets from the questions I asked in these issues (mostly the roi_head() function) should be enough for you to reproduce the result relatively easily.
EDIT: I am discussing export_model.py issues with keypoints in #5143, since it receives more attention.
Instructions To Reproduce the 🐛 Bug:
run, such as a private dataset.
Expected behavior:
An onnx file generated that I can :
Environment:
Provide your environment information using the following command: