
PyTorch 2.0 support #262

Open
kxzxvbk opened this issue Mar 27, 2023 · 2 comments

Comments


kxzxvbk commented Mar 27, 2023

Great work!
But when I use torch==2.0.0, compilation fails for ViT. I get this warning:
[2023-03-27 12:49:31,505] torch._dynamo.convert_frame: [WARNING] torch._dynamo hit config.cache_size_limit (64)
function: 'forward' (/opt/conda/lib/python3.8/site-packages/vit_pytorch/vit.py:19)
reasons: ___check_obj_id(self, 140301176861456)
to diagnose recompilation issues, see https://pytorch.org/docs/master/dynamo/troubleshooting.html.
This indicates that PyTorch gives up compiling the model. Why does this happen, and are there any solutions?
Thanks!

@andreamad8

+1


uniartisan commented Apr 19, 2023

Facing the same problem.

This may help: https://mmdetection.readthedocs.io/zh_CN/latest/notes/faq.html

- Make sure the input images to the network have a fixed shape, not multi-scale.
- Lower the torch._dynamo.config.cache_size_limit parameter. TorchDynamo converts and caches Python bytecode, and the compiled functions are stored in that cache. When a later guard check finds that a function needs to be recompiled, it is recompiled and cached again. However, once the number of recompilations exceeds the configured maximum (64), the function is no longer cached or recompiled. As mentioned above, the loss-calculation and post-processing parts of an object-detection algorithm are computed dynamically and trigger recompilation every time, so setting torch._dynamo.config.cache_size_limit to a smaller value can effectively reduce compilation time.

In MMDetection, you can set the torch._dynamo.config.cache_size_limit parameter through the environment variable DYNAMO_CACHE_SIZE_LIMIT.
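As a minimal sketch, outside of MMDetection you can set the same limit directly in Python before calling torch.compile; the value 8 here is purely illustrative, tune it for your workload:

```python
import torch._dynamo

# Lower TorchDynamo's recompilation cache limit (the default that the
# warning above reports is 64). Once a function exceeds this many
# recompilations, Dynamo falls back to eager mode for it instead of
# caching further compiled variants.
torch._dynamo.config.cache_size_limit = 8

print(torch._dynamo.config.cache_size_limit)
```

Set this before the first compiled call so the limit applies from the start.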
