Issue:
When deploying the NBEATSx model and making use of all the available data up to that date as the training set (which should increase performance), I do not have any future values left over for validation. So how do I know after how many training epochs to stop?
Note:
I am fairly new to this and might be overlooking something, so I am happy to take any correction.
Thank you for your consideration and continuous work on improving the Nixtla universe and, with that, the state of forecasting!
Use case
Context:
In the way early stopping is currently implemented, it is not possible to use all of the available data for training. This is striking, as the most recent data (say, the last week) often has the strongest predictive power for forecasting the next period (say, the next week). The original NBEATSx paper here proposes a random early-stopping set as a solution (a random window drawn from the updated training set). This would be a very useful feature, as it would permit deploying the NBEATSx model in production with early stopping while still using all of the available data for training.
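The idea can be sketched in a few lines of plain Python. This is only an illustration of the sampling step, not part of the neuralforecast API; the function name and interface are hypothetical:

```python
import random

def random_early_stopping_window(n_obs, val_len, seed=None):
    """Sample a random contiguous window of `val_len` observations to
    serve as the early-stopping (validation) set, so the model can be
    trained on data right up to the forecast origin while still having
    a held-out window to monitor.

    Returns (start, end) indices of the sampled window."""
    rng = random.Random(seed)
    # The window must lie entirely inside the series.
    start = rng.randrange(0, n_obs - val_len + 1)
    return start, start + val_len

# Example: 104 weekly observations; monitor early stopping on a
# randomly placed 4-week window instead of always the last 4 weeks.
start, end = random_early_stopping_window(n_obs=104, val_len=4, seed=0)
print(f"validate on observations [{start}, {end})")
```

Because the validation window is drawn at random rather than fixed to the end of the series, the most recent observations remain available for training.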
@cchallu: as you suggested, I submitted an issue for the random early-stopping set here (as in the NBEATSx paper). If there are any questions, please let me know and I will be happy to provide more context.
@steffenrunge I think you can use cross-validation to achieve what you want. At the end, cross-validation refits the model on the entire training data using the best hyperparameters.
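For intuition, the window layout behind rolling-origin cross-validation can be sketched generically in Python. This is a simplified illustration of the concept, not the neuralforecast `cross_validation` implementation:

```python
def rolling_origin_windows(n_obs, horizon, n_windows, step_size):
    """Enumerate (train_end, test_start, test_end) index triples for
    rolling-origin cross-validation: each window trains on observations
    [0, train_end) and validates on [test_start, test_end).  After the
    best configuration is chosen this way, the final model can be refit
    on all n_obs observations."""
    windows = []
    for i in range(n_windows):
        test_end = n_obs - i * step_size   # last window ends at the final point
        test_start = test_end - horizon
        windows.append((test_start, test_start, test_end))
    return list(reversed(windows))

for train_end, test_start, test_end in rolling_origin_windows(
        n_obs=100, horizon=7, n_windows=3, step_size=7):
    print(f"train on [0, {train_end}), validate on [{test_start}, {test_end})")
# -> train on [0, 79), validate on [79, 86)
#    train on [0, 86), validate on [86, 93)
#    train on [0, 93), validate on [93, 100)
```

Each validation window stands in for "the future" at a different forecast origin, which is why the scheme gives an epoch-count (or hyperparameter) estimate that transfers to the final refit on all of the data.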