Bug: InformerModel, decoder_input torch.cat size of tensor mismatch error otherwise #30750
Comments
cc @kashif
@jhzsquared so the intention was that the model is learning the next step's distribution given the past as well as the covariates up to the time step at which one is forecasting... can you paste in your
I'm not using any lags right now, so I have an initial model input. And thanks! Conceptually that makes sense... functionally though, when
right, so if you don't want lags, set that array to [1] and increase your context length by one more time step... can you check if that works?
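The advice above follows from the shape constraint on lagged-subsequence extraction: even the smallest lag of 1 consumes one context step, so `lags_sequence=[1]` ("no lags") needs one extra step of history. Below is a minimal sketch of that constraint; `lagged_subsequences` is an illustrative simplification, not the library's own function.

```python
import numpy as np

def lagged_subsequences(sequence, lags, subsequences_length):
    # Simplified sketch of the lagged-feature extraction used by the
    # HF time-series models: for each lag l, slice a window of
    # `subsequences_length` steps ending l steps before the end.
    sequence_length = sequence.shape[1]
    if max(lags) + subsequences_length > sequence_length:
        raise ValueError("lags cannot go further back than the history length")
    lagged = []
    for lag in lags:
        begin = -lag - subsequences_length
        end = -lag if lag > 0 else None
        lagged.append(sequence[:, begin:end])
    return np.stack(lagged, axis=-1)  # (batch, subsequences_length, num_lags)

history = np.arange(10, dtype=float).reshape(1, 10)  # (batch=1, time=10)
# lags=[1] means "previous step only": 10 steps of history support
# at most 9 subsequence steps, hence "one more time step" of context.
out = lagged_subsequences(history, lags=[1], subsequences_length=9)
print(out.shape)  # (1, 9, 1)
```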
Ohh okay, did not realize that should have been
Possible solution: should `shift=0` at L2020?

Referencing: https://github.com/huggingface/transformers/blame/4fdf58afb72b0754da30037fc800b6044e7d9c99/src/transformers/models/informer/modeling_informer.py#L2020

I've trained/tested an Informer model, but when generating the prediction I run into a "RuntimeError: Sizes of tensors must match except in Dimension 2..." at line 2029 in `modeling_informer.py`. I broke it apart a bit, and after playing around, it looks like `shift=1` in line 2020 may have been mistakenly hardcoded. Otherwise, the tensor shape of `reshaped_lagged_sequence` at dimension 1 will always be one less than that of `repeated_features`. Alternatively, of course, `repeated_features` could avoid using `k+1` at L2026. I'm not clear on the author's intuition behind the shift vs. not.