
Bidirectional LSTM #24

Open

TinaB19 opened this issue May 8, 2017 · 2 comments

TinaB19 commented May 8, 2017

I tried to use a bidirectional LSTM with merge_mode='sum' for encoding, but when I try to predict headlines, the model barely generates anything, even though the loss is lower than with the simple LSTM. This is the only change I made. Do you know why this happens?
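For reference, a minimal sketch of the change being described, assuming a Keras model with a plain LSTM encoder; the layer size and the name `encoder` are placeholders, not this repo's actual code:

```python
from keras.layers import LSTM, Bidirectional

# before: a plain forward LSTM over the input sequence (hypothetical layer)
encoder = LSTM(512, return_sequences=True)

# after: forward and backward LSTMs whose outputs are summed element-wise,
# which is the merge_mode='sum' change described above
encoder = Bidirectional(LSTM(512, return_sequences=True), merge_mode='sum')
```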

udibr (Owner) commented May 8, 2017

Generating with a BiLSTM is tricky. During generation you move forward step by step, producing one word at a time, but the BiLSTM model requires you to move both forward and backward over the sequence.

If you want to use a BiLSTM, you can apply it to the steps that run over the article content, and then run just a forward LSTM over the headline, so that later you can generate your own headline from this forward-only pass.
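A rough sketch of that split, assuming a Keras functional-API model; all names, dimensions, and the context-vector wiring are hypothetical placeholders rather than this repo's actual architecture:

```python
from keras.layers import (Input, Embedding, LSTM, Bidirectional, Dense,
                          TimeDistributed, RepeatVector, Concatenate)
from keras.models import Model

vocab_size, embed_dim, hidden, head_len = 40000, 100, 512, 25

desc_in = Input(shape=(None,), name='description')   # article/description tokens
head_in = Input(shape=(head_len,), name='headline')  # headline tokens

embed = Embedding(vocab_size, embed_dim)              # shared word embedding

# Encoder: BiLSTM over the description only; the two directions are summed
# into a single fixed-size context vector (merge_mode='sum').
context = Bidirectional(LSTM(hidden), merge_mode='sum')(embed(desc_in))

# Decoder: forward-only LSTM over the headline. The context vector is repeated
# at every headline step, so generation can proceed strictly left-to-right.
dec_in = Concatenate()([embed(head_in), RepeatVector(head_len)(context)])
dec_seq = LSTM(hidden, return_sequences=True)(dec_in)
next_word = TimeDistributed(Dense(vocab_size, activation='softmax'))(dec_seq)

model = Model([desc_in, head_in], next_word)
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
```

In practice one would likely want some form of attention over the encoded description instead of a single repeated context vector; the sketch only illustrates the forward/backward split described above.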

TinaB19 (Author) commented May 17, 2017

Can this be done by changing the current model, considering that it feeds both the description and the headline as input to the same model? I tried to do it myself but had no luck.
