Hyperparameter tuning with TiDE and TFT: no completed trials #2363
Hi @flight505, in general you can also check the FAQ section of the documentation.
Hey @madtoinou, thank you for reaching out. It appears that during the evaluation, there were issues with the varying length of the patient series. I included a custom SMAPE and conducted an evaluation over the patient series. However, the validation loss is quite high. Ideally, I intended for the model to train across patients rather than just individual series. The data is both multivariate and includes patient IDs as groups, which I assume also makes it multi-series. I am unsure about the exact approach to handle this. I was using TFT with pytorch-forecasting, which was a bit easier to work with, but it seems that it is no longer maintained. Any insights you could provide would be greatly appreciated.
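Since the patient series have varying lengths, one thing worth checking before blaming the model is whether the actual and forecast values being scored actually cover the same time steps. A minimal pandas sketch (the series and values here are made up for illustration) of restricting both to their common index before computing an error:

```python
import pandas as pd

# Hypothetical actual/forecast series whose indices only partially overlap,
# e.g. because an inverse transform or a split shifted one of them.
actual = pd.Series([10.0, 11.0, 12.0, 13.0], index=[0, 1, 2, 3])
forecast = pd.Series([11.5, 12.5, 13.5], index=[2, 3, 4])

# Restrict both to the time steps they share before scoring; comparing
# values from different time steps silently inflates the error.
common = actual.index.intersection(forecast.index)
a, f = actual.loc[common], forecast.loc[common]
print(list(common))                  # [2, 3]
print(float((f - a).abs().mean()))   # 0.5
```

If the intersection is empty or much shorter than expected, the high validation loss is a symptom of misalignment rather than of the model itself.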
Hi @flight505, I haven't checked your entire code (not really a minimal reproducible example :) ), but here are some things that are causing the large errors: the inverse transformation of …, e.g.:
Forecasts:
resulting in actual and forecast values from completely different time steps (here we see a lot of pitfalls that can happen when working with time series data; if you use Darts, we handle this under the hood for you). Also, for your metric to work properly (and fast!), try to implement it similarly to any Darts metric (e.g. here, including the decorators and the call to …).
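As a rough sketch of the SMAPE formula itself in plain NumPy (not the actual Darts metric machinery with its decorators, just the vectorized computation):

```python
import numpy as np

def smape(actual, forecast) -> float:
    """Symmetric MAPE in percent: 200/n * sum(|f - a| / (|a| + |f|)).

    Zero-safe variant: time steps where both actual and forecast are
    zero contribute 0 instead of producing 0/0.
    """
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    diff = np.abs(forecast - actual)
    denom = np.abs(actual) + np.abs(forecast)
    # avoid 0/0 when both values are zero at a time step
    ratio = np.divide(diff, denom, out=np.zeros_like(diff), where=denom != 0)
    return float(200.0 * ratio.mean())

print(smape([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # 0.0
```

A fully vectorized metric like this runs fast enough to be called inside every Optuna trial without noticeably slowing the study down.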
Hi, I don't think this is a bug, but I can't trace the error.
It might be due to the data itself, as it is multivariate and grouped by ID.
The score is returning [nan, nan, nan, nan, ...], leading to trial = study.best_trial failing.
The val_loss=120, so it is very high, but I think it should still return a validation score.
It might be that the model is not correctly using all the data, and is simply treating it as a single series?
The TimeSeries is created with .from_dataframe, not with .from_group_dataframe, as the latter was creating a few errors.
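For reference, what from_group_dataframe does conceptually is split one long frame into one series per group, so a global model can train across patients. A plain-pandas sketch with made-up patient data (patient IDs and values are hypothetical):

```python
import pandas as pd

# Hypothetical patient data: one long frame with a "patient_id" group column.
df = pd.DataFrame({
    "patient_id": [1, 1, 1, 2, 2, 2, 2],
    "time":       [0, 1, 2, 0, 1, 2, 3],
    "value":      [10.0, 11.0, 12.0, 5.0, 6.0, 7.0, 8.0],
})

# Split into one series per patient; models trained on a list of series
# (rather than one concatenated series) learn across all patients.
per_patient = [g.set_index("time")["value"] for _, g in df.groupby("patient_id")]

print(len(per_patient))     # 2 patients
print(len(per_patient[1]))  # the second patient has 4 time steps
```

Building the TimeSeries with .from_dataframe on the concatenated frame instead would stitch all patients into one series, which matches the symptom described above (the model treating the data as a single series).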