1. I have searched related issues but cannot get the expected help.
2. The bug has not been fixed in the latest version.
Describe the bug
Here, the value of `h_input_length_buf_` is modified on the CPU side while a GPU operator may still be using that value for its computation. Don't we need to synchronize before the CPU-side modification, to make sure the GPU operator has finished consuming the data?
Reproduction
python ./benchmark/profile_throughput_audio.py llama model
> Do we not need to synchronize before the CPU side modification to ensure that the GPU operator has used up the data?
Which GPU operator are you referring to?
Thank you for your reply. I just checked the code, and it seems that lmdeploy itself does not touch this data on the GPU. We have added an extra kernel of our own, so for that kernel we can only modify the data after the kernel has finished executing.
Environment
Error traceback
No response