
Add some notes to these models, especially the corresponding paper links #443

Open
XYZOK000 opened this issue Nov 9, 2021 · 2 comments
Labels: documentation (Improvements or additions to documentation)

XYZOK000 commented Nov 9, 2021

Could you add some notes to these models, especially links to the corresponding papers? For example, for the paper "Enhancing the Locality and Breaking the Memory Bottleneck of Transformer on Time Series Forecasting", I don't know which model to use.
Thank you.
pytorch_model_dict = {
    "MultiAttnHeadSimple": MultiAttnHeadSimple,
    "SimpleTransformer": SimpleTransformer,
    "TransformerXL": TransformerXL,
    "DummyTorchModel": DummyTorchModel,
    "LSTM": LSTMForecast,
    "SimpleLinearModel": SimpleLinearModel,
    "CustomTransformerDecoder": CustomTransformerDecoder,
    "DARNN": DARNN,
    "DecoderTransformer": DecoderTransformer,
    "BasicAE": AE,
    "Informer": Informer
}
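For context, a model name in the config is resolved by a plain dictionary lookup on this dict, so whichever string you pass has to match one of the keys above exactly. A minimal sketch of that lookup (the load_model helper and the constructor arguments in the example are hypothetical, not the library's actual API):

def load_model(model_name: str, model_params: dict):
    # Resolve the string from the config to a model class via the dict above.
    if model_name not in pytorch_model_dict:
        raise KeyError(
            f"Unknown model '{model_name}'; valid names: {sorted(pytorch_model_dict)}"
        )
    model_cls = pytorch_model_dict[model_name]
    # Instantiate the class with whatever keyword arguments that model expects.
    return model_cls(**model_params)

# Example: an LSTM with hypothetical constructor arguments.
# model = load_model("LSTM", {"seq_length": 20, "n_time_series": 3, "output_seq_len": 1})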

isaacmg (Collaborator) commented Nov 12, 2021

Yeah, this should probably be updated in the README as well, but here is the mapping:

Vanilla LSTM -> LSTM: A basic LSTM that is suitable for multivariate time series forecasting and transfer learning.
Full transformer -> SimpleTransformer: The full original transformer with all 8 encoder and decoder blocks. Requires passing the target in at inference.
Simple Multi-Head Attention -> MultiAttnHeadSimple: A simple multi-head attention block and linear embedding layers. Suitable for transfer learning.
Transformer with a linear decoder -> CustomTransformerDecoder: A transformer with n encoder blocks (this is tunable) and a linear decoder.
DA-RNN -> DARNN: A well-rounded model which utilizes an LSTM + attention.
Enhancing the Locality and Breaking the Memory Bottleneck of Transformer on Time Series Forecasting -> DecoderTransformer
Transformer XL -> TransformerXL (not fully supported yet)
Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting -> Informer
DeepAR -> DeepAR (only available on certain branches)

So the model from that paper is called DecoderTransformer in the model_dict; an annotated version of the dict with the paper references is sketched below.
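Putting the two together, the model_dict with the papers from the list above noted inline might look something like this. This is a documentation sketch only; the paper titles come from the comment above, while the arXiv IDs are added from memory and worth double-checking before they go into the README.

pytorch_model_dict = {
    # Simple Multi-Head Attention: attention block + linear embedding layers.
    "MultiAttnHeadSimple": MultiAttnHeadSimple,
    # Full transformer ("Attention Is All You Need", arXiv:1706.03762).
    "SimpleTransformer": SimpleTransformer,
    # Transformer-XL (arXiv:1901.02860); not fully supported yet.
    "TransformerXL": TransformerXL,
    "DummyTorchModel": DummyTorchModel,      # not covered in the list above
    # Vanilla LSTM for multivariate forecasting and transfer learning.
    "LSTM": LSTMForecast,
    "SimpleLinearModel": SimpleLinearModel,  # not covered in the list above
    # Transformer with n encoder blocks and a linear decoder.
    "CustomTransformerDecoder": CustomTransformerDecoder,
    # DA-RNN: LSTM + attention (arXiv:1704.02971).
    "DARNN": DARNN,
    # "Enhancing the Locality and Breaking the Memory Bottleneck of Transformer
    # on Time Series Forecasting" (arXiv:1907.00235).
    "DecoderTransformer": DecoderTransformer,
    "BasicAE": AE,                           # not covered in the list above
    # "Informer: Beyond Efficient Transformer for Long Sequence
    # Time-Series Forecasting" (arXiv:2012.07436).
    "Informer": Informer
}

DeepAR is not in this dict because, per the comment above, it is only available on certain branches.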

XYZOK000 (Author) commented Nov 13, 2021 via email

isaacmg added the documentation label Nov 22, 2022