Change custom_skip_targets meaning for constant_prop_pass #3491
Conversation
See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/3491

✅ No failures as of commit fe6193e with merge base c83af25. This comment was automatically generated by Dr. CI and updates every 15 minutes.
This pull request was exported from Phabricator. Differential Revision: D56894215
Force-pushed cd65665 to dde63a1
Summary: Pull Request resolved: pytorch#3491

Some users of `constant_prop_pass` want to fold across calls to `full`, because representing a tensor as a program constant is a requirement for some backends. This came up when writing some tests using `torch.ones` as a weight tensor, which is represented as `aten.full` in Edge Dialect. When the user specifies a custom skip set, do *not* add the default `aten.full` function, in case the user doesn't want it.

Differential Revision: D56894215
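A minimal sketch of the semantics change described above. The names `DEFAULT_SKIP_TARGETS` and `resolve_skip_targets` are hypothetical, for illustration only; the actual ExecuTorch pass operates on operator overloads inside an exported program rather than strings.

```python
# Hypothetical illustration of the skip-set semantics change in this PR.
# Before: a user-supplied custom_skip_targets was merged with the default
# set, so `aten.full` was always skipped and could never be folded.
# After: a user-supplied set *replaces* the default, so omitting
# `aten.full` from the custom set allows it to be constant-propagated.

DEFAULT_SKIP_TARGETS = {"aten.full.default"}  # skipped when no custom set is given


def resolve_skip_targets(custom_skip_targets=None):
    """Return the set of targets constant propagation should skip.

    If the user provides a custom set, use it as-is (new behavior);
    otherwise fall back to the defaults.
    """
    if custom_skip_targets is not None:
        return set(custom_skip_targets)
    return set(DEFAULT_SKIP_TARGETS)
```

Under this reading, a user who wants `torch.ones` weights (lowered to `aten.full` in Edge Dialect) folded into program constants passes a custom skip set that simply omits `aten.full`.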
Force-pushed dde63a1 to e4be6da
Force-pushed e4be6da to a5829ab
Reviewed By: angelayi
Force-pushed a5829ab to fe6193e
This pull request has been merged in b93b7ae.