This repository has been archived by the owner on Nov 8, 2022. It is now read-only.

Performance issue in the definition of _inference, examples/memn2n_dialogue/memn2n_dialogue.py(P1) #227

Open
DLPerf opened this issue Aug 20, 2021 · 2 comments
Labels
bug Something isn't working

Comments


DLPerf commented Aug 20, 2021

Hello, I found a performance issue in the definition of _inference in examples/memn2n_dialogue/memn2n_dialogue.py: tf.nn.embedding_lookup is recalculated repeatedly during program execution, reducing efficiency. I think it should be computed before the loop.

The same issue exists for the tf.reduce_sum calls on lines 187 and 200.
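The suggested fix is loop-invariant hoisting: the lookup does not depend on the loop variable, so it can be computed once before the loop rather than once per iteration. A minimal sketch of the pattern in plain Python (the `embedding_lookup` stand-in and the `hops` count are hypothetical stand-ins for `tf.nn.embedding_lookup` and the memory-network hop loop in `_inference`):

```python
call_count = 0

def embedding_lookup(table, ids):
    """Stand-in for tf.nn.embedding_lookup; counts how often it runs."""
    global call_count
    call_count += 1
    return [table[i] for i in ids]

table = {0: [0.1, 0.2], 1: [0.3, 0.4]}
ids = [0, 1]
hops = 3  # number of loop iterations (hypothetical)

# Before: the loop-invariant lookup is re-evaluated on every hop.
call_count = 0
for _ in range(hops):
    emb = embedding_lookup(table, ids)
assert call_count == hops  # one redundant op per iteration

# After: hoist the lookup out of the loop and reuse the result.
call_count = 0
emb = embedding_lookup(table, ids)
for _ in range(hops):
    pass  # use the precomputed `emb` inside each hop
assert call_count == 1
```

In graph-mode TensorFlow 1.x the same reasoning applies at graph-construction time: creating the op inside a Python loop adds duplicate nodes to the graph, while creating it once before the loop lets every iteration reuse a single node.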

Looking forward to your reply. Btw, I would be glad to open a PR to fix it if you are too busy.

@DLPerf DLPerf added the bug Something isn't working label Aug 20, 2021

DLPerf commented Sep 3, 2021

@danielkorat Hi, could you take a look at this issue?


peteriz commented Sep 5, 2021

Hi @DLPerf, thanks for letting us know about the issue. You can open a PR to fix the issue if you wish. Any contribution is welcome.
