fix constant tagging in mps backend #3503
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/3503
❌ 1 New Failure as of commit 25eae44 with merge base a116d89.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
This pull request was exported from Phabricator. Differential Revision: D56941763
Looks good. Thanks @cccclai
Summary: Tested with pytorch#3399; this command passes:

```
python -m examples.models.llama2.export_llama -kv --mps
```

Without this diff, it errors out:

```
in _verify_exported_program_signature
    raise SpecViolationError(
torch._export.verifier.SpecViolationError: Buffer output getitem_1 does not point to a buffer that exists.
Dict of buffers that are mutated, in order: {'getitem_1': 'layers_0_attention_SDPA_kv_cache_k_cache', 'getitem': 'layers_0_attention_SDPA_kv_cache_v_cache', 'getitem_3': 'layers_1_attention_SDPA_kv_cache_k_cache', 'getitem_2': 'layers_1_attention_SDPA_kv_cache_v_cache', 'getitem_5': 'layers_2_attention_SDPA_kv_cache_k_cache', 'getitem_4': 'layers_2_attention_SDPA_kv_cache_v_cache', 'getitem_7': 'layers_3_attention_SDPA_kv_cache_k_cache', 'getitem_6': 'layers_3_attention_SDPA_kv_cache_v_cache', 'getitem_9': 'layers_4_attention_SDPA_kv_cache_k_cache', 'getitem_8': 'layers_4_attention_SDPA_kv_cache_v_cache'}
Buffer nodes available: []
```

The root cause is that tagging by `is_parameter` marks all data as constant, including mutable buffers.

Differential Revision: D56941763
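To illustrate the root cause: if the backend tags every parameter and buffer as constant data, mutable buffers such as KV caches get folded away, and the export verifier then finds no buffer node for the mutated outputs to point at. The sketch below is hypothetical (the names `GraphSignature` and `tag_constants` are made up for illustration and are not the actual ExecuTorch API); it models the fix of excluding mutable buffers from constant tagging, mirroring how `torch.export` records mutated buffers in a `buffers_to_mutate` mapping from output nodes to buffer names.

```python
# Hypothetical sketch of constant tagging, NOT the real ExecuTorch/MPS API.
# Shows why "tag everything that is a parameter or buffer" breaks mutable
# buffers, and how excluding mutated buffers avoids the SpecViolationError.
from dataclasses import dataclass


@dataclass
class GraphSignature:
    """Simplified stand-in for an exported program's graph signature."""
    parameters: set          # constant weights
    buffers: set             # all buffers, mutable or not
    buffers_to_mutate: dict  # output node name -> mutated buffer name


def tag_constants(sig: GraphSignature, placeholders: list) -> set:
    """Tag only truly constant data: parameters and non-mutable buffers.

    The buggy behavior was tagging every buffer, including mutable KV
    caches, which removed them from the graph and left the verifier with
    "Buffer nodes available: []".
    """
    mutable = set(sig.buffers_to_mutate.values())
    tagged = set()
    for name in placeholders:
        if name in sig.parameters:
            tagged.add(name)
        elif name in sig.buffers and name not in mutable:
            tagged.add(name)  # frozen buffer: safe to treat as constant
    return tagged


if __name__ == "__main__":
    sig = GraphSignature(
        parameters={"w"},
        buffers={"k_cache", "rope_freqs"},
        buffers_to_mutate={"getitem_1": "k_cache"},
    )
    # k_cache is mutated, so it must stay a live buffer, not a constant.
    print(tag_constants(sig, ["w", "k_cache", "rope_freqs"]))
```

With this distinction, the mutated `k_cache` stays in the graph as a buffer node, so `_verify_exported_program_signature` can resolve the mutated outputs.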
Reviewed By: larryliu0820
This pull request has been merged in 50e9ee9.