
NBEATSx: Retraining with the complete data (including the validation set) makes it impossible to do *early stopping* #883

Open
steffenrunge opened this issue Jan 30, 2024 · 2 comments

Comments

@steffenrunge

Description

Issue:
When deploying the NBEATSx model, I want to use all of the data available up to that date as the training set (which should improve performance). But then I have no future values left over, so how do I know after how many training epochs to stop?

Note:
I am fairly new to this and might be overlooking something, so I am happy to take any corrections.

Thank you for your consideration and continuous work on improving the Nixtla universe and, with that, the state of forecasting!

Use case

Context:
The way early stopping is currently implemented, I cannot use all of the available data for training. This is striking because the most recent data (say, the last week) often has the strongest predictive power for the next period (say, the next week). The original NBEATSx paper proposes a random early-stopping set as a solution (a randomly chosen period drawn from the updated training set). This could be a very useful feature, as it would permit deploying the NBEATSx model in production with early stopping while still using all of the available data for training. A workaround sketch under the current API follows below.
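A minimal sketch of one workaround with the current neuralforecast API, using the library's AirPassengers example data: hold out the last window for early stopping, record the step at which training stopped, then refit on the full data for that many steps. This only approximates the random early-stopping set from the paper (the holdout here is the last window, not a random one), and reading the stopped step from the underlying PyTorch Lightning trainer is an assumption about internals, not a documented API.

```python
from neuralforecast import NeuralForecast
from neuralforecast.models import NBEATSx
from neuralforecast.utils import AirPassengersDF

df = AirPassengersDF  # long format: unique_id, ds, y

# Step 1: fit with the last 12 points held out for early stopping.
model = NBEATSx(h=12, input_size=24,
                max_steps=1000,
                early_stop_patience_steps=5,
                val_check_steps=10)
nf = NeuralForecast(models=[model], freq='M')
nf.fit(df=df, val_size=12)

# Step 2 (assumption): read the step at which training stopped from the
# underlying PyTorch Lightning trainer, then refit on ALL data for that
# many steps with no validation set. This attribute access depends on
# library internals and may change between versions.
stopped_steps = nf.models[0].trainer.global_step

final_model = NBEATSx(h=12, input_size=24, max_steps=stopped_steps)
nf_final = NeuralForecast(models=[final_model], freq='M')
nf_final.fit(df=df)  # every observation is now used for training
forecasts = nf_final.predict()
```

The assumption is that the stopping step found on the slightly smaller training set transfers to the full set, which is close in spirit to what a random early-stopping set would provide.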

@steffenrunge
Author

@cchallu: as you suggested, I submitted an issue for the random early-stopping set here (as in the NBEATSx paper). If there are any questions, please let me know and I will be happy to provide more context.

@elephaint
Contributor

elephaint commented May 7, 2024

@steffenrunge I think you can use cross-validation to do what you want: at the end, cross-validation refits the model on the entire training data using the best hyperparameters. A sketch follows below.
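A minimal sketch of that route, reusing the AirPassengers setup from above. Whether the final refit on the full data happens automatically depends on the library version, so treat the refit behavior as an assumption and this as an illustration of the evaluation call only:

```python
from neuralforecast import NeuralForecast
from neuralforecast.models import NBEATSx
from neuralforecast.utils import AirPassengersDF

df = AirPassengersDF
model = NBEATSx(h=12, input_size=24, max_steps=500,
                early_stop_patience_steps=5, val_check_steps=10)
nf = NeuralForecast(models=[model], freq='M')

# Rolling-origin evaluation: 3 windows of 12 steps each; each window
# trains on data before its cutoff and forecasts the next 12 points,
# with the 12 points before the cutoff used for early stopping.
cv_df = nf.cross_validation(df=df, n_windows=3, step_size=12, val_size=12)
print(cv_df.head())  # columns: unique_id, ds, cutoff, y, NBEATSx
```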

Let me know if that solves the problem for you.
