
Feature/multivariate wrapper #1917

Open · wants to merge 20 commits into base: master

Conversation

@JanFidor (Contributor)

Fixes #1845.

Summary

I've added a wrapper which allows a LocalForecastingModel to act like a GlobalForecastingModel.

Other Information

FutureCovariatesLocalForecastingModels support multivariate series, but I decided to treat them like the rest of the LocalForecastingModels and train a separate instance of the model on each component of the provided series (see the sketch below).
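For context, here is a minimal sketch of the approach; it is an illustration rather than the exact code of the PR (which lives in darts/models/forecasting/multivariate_forecasting_model_wrapper.py) and assumes darts' untrained_model(), univariate_component(), supports_future_covariates, and concatenate():

```python
from typing import List, Optional

from darts import TimeSeries, concatenate
from darts.models.forecasting.forecasting_model import LocalForecastingModel


class MultivariateForecastingModelWrapper:
    """Fits one copy of a local model per target component."""

    def __init__(self, model: LocalForecastingModel):
        self.model = model
        self._trained_models: List[LocalForecastingModel] = []

    def fit(self, series: TimeSeries, future_covariates: Optional[TimeSeries] = None):
        # One fresh, untrained copy of the base model per component.
        self._trained_models = [
            self.model.untrained_model() for _ in range(series.n_components)
        ]
        for component, model in zip(series.components, self._trained_models):
            target = series.univariate_component(component)
            if model.supports_future_covariates and future_covariates is not None:
                model.fit(target, future_covariates=future_covariates)
            else:
                model.fit(target)
        return self

    def predict(
        self, n: int, future_covariates: Optional[TimeSeries] = None
    ) -> TimeSeries:
        predictions = [
            model.predict(n=n, future_covariates=future_covariates)
            if model.supports_future_covariates and future_covariates is not None
            else model.predict(n=n)
            for model in self._trained_models
        ]
        # Stack the per-component univariate forecasts back together.
        return concatenate(predictions, axis=1)
```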

@codecov-commenter commented Jul 25, 2023

Codecov Report

Patch coverage: 87.03% and project coverage change: -0.02% ⚠️

Comparison is base (137cf8c) 93.72% compared to head (ddc3fd3) 93.70%.


Additional details and impacted files
@@            Coverage Diff             @@
##           master    #1917      +/-   ##
==========================================
- Coverage   93.72%   93.70%   -0.02%     
==========================================
  Files         131      132       +1     
  Lines       12662    12702      +40     
==========================================
+ Hits        11867    11902      +35     
- Misses        795      800       +5     
Files Changed                                           Coverage           Δ
...ecasting/multivariate_forecasting_model_wrapper.py   86.79% <86.79%>    (ø)
darts/models/__init__.py                                58.22% <100.00%>   (+0.53%) ⬆️

... and 7 files with indirect coverage changes


@dennisbader (Collaborator)

Hi @JanFidor and thanks for this PR. #1845 is actually about a wrapper for multivariate series (a series with multiple target components / columns) rather than multiple series (multiple univariate or multivariate series in a list or sequence).

I don't think we will add a wrapper to make local models global, as this could potentially mean storing thousands of models. Also, there would be a discrepancy between the series used at training time and at prediction time.
The current global models train a single model on multiple series and accept any series at prediction time. With this wrapper, we wouldn't know which trained model a series passed at prediction time should be mapped to.

@JanFidor (Contributor, Author)

JanFidor commented Aug 2, 2023

Hi @dennisbader! Thanks for the explanation, I can definitely see why a GlobalForecastingModel wrapper would be problematic. I made some fixes: the wrapper now operates on a single TimeSeries (univariate or multivariate), and I changed the name accordingly.

@madtoinou (Collaborator) left a comment

Cool addition, good work @JanFidor!

I noticed some leftovers from the "multiple series support" implementation and the EnsembleModel "template" :)

In addition to exposing this class to users, we could maybe also include it in the fit() logic, as is done for RegressionModel in _fit_model() (the model is wrapped in a MultiOutputRegressor when necessary); a sketch follows below. Maybe with a warning, since each sub-model will not be able to leverage all the components to generate its forecasts. WDYT @dennisbader?
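A rough sketch of what such auto-wrapping in fit() could look like (hypothetical; the _wrapper attribute, _fit() delegate, and warning text are assumptions for illustration, not the actual darts code path):

```python
import warnings


def fit(self, series, **kwargs):
    # Hypothetical sketch of transparently wrapping a local model when it is
    # fit on a multivariate series, analogous to RegressionModel._fit_model()
    # wrapping its sklearn estimator in a MultiOutputRegressor.
    if series.n_components > 1:
        warnings.warn(
            "Multivariate series passed to a univariate local model; fitting "
            "one model copy per component. Sub-models cannot leverage "
            "cross-component information to generate their forecasts."
        )
        self._wrapper = MultivariateForecastingModelWrapper(self)
        return self._wrapper.fit(series, **kwargs)
    return self._fit(series, **kwargs)  # hypothetical single-series code path
```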

@JanFidor (Contributor, Author)

Again, thanks for the review @madtoinou! You caught me red-handed, I was certain I didn't leave anything from the original "template" ;) Btw, it seems like something happened to StatsForecastAutoETS, because it breaks CI/CD on a few unrelated PRs (this one and #1915 included).

@madtoinou (Collaborator)

@JanFidor The error in the CI/CD was caused by a statsforecast release in which they fixed a bug that the unit test was actually "relying on"; everything should run smoothly now.

I'll try to review your PR by the end of the weekend, thank you for addressing my points.

@madtoinou (Collaborator) left a comment

Hi @JanFidor,

A few additional comments about the docstring and simplification of the logic in fit() and predict().

darts/timeseries.py (outdated, resolved)
Comment on lines 75 to 79:

```python
predictions = [
    model.predict(n=n, future_covariates=future_covariates)
    if isinstance(model, FutureCovariatesLocalForecastingModel)
    else model.predict(n=n)
    for model in self._trained_models
]
```
Collaborator:

Instead of the isinstance check, rather use supports_future_covariates.

Have a single call to predict() with an inline if-else to pass the future_covariates (check the suggestion in fit()); a sketch follows below.

Also, it would be great to at least check that the forecasted series has the same number of components as there are models in the self._trained_models attribute.
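The suggested single-call form would look roughly like this (a sketch; note that, as discussed in the replies below, this exact shape fails for models whose predict() signature has no future_covariates parameter at all):

```python
predictions = [
    model.predict(
        n=n,
        future_covariates=(
            future_covariates if model.supports_future_covariates else None
        ),
    )
    for model in self._trained_models
]
```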

Contributor Author:

Hi, I tried the approach with an inline if-else for passing the covariates, but it was throwing errors for models which don't support future covariates.

Collaborator:

Because they were trained without the future_covariates (and hence theoretically support them, just not in this context at prediction time), or because they actually did not support them?

Contributor Author:

Because they actually did not support them: some of the LocalForecastingModels don't support future covariates, so an error is thrown about future_covariates not being a parameter of fit() and predict(). A minimal illustration follows below.
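For illustration, a minimal (hypothetical) reproduction of that failure mode:

```python
from darts.models import NaiveSeasonal
from darts.utils.timeseries_generation import sine_timeseries

series = sine_timeseries(length=48)
model = NaiveSeasonal(K=12)  # a local model without future-covariates support
model.fit(series)

# NaiveSeasonal.predict() has no `future_covariates` parameter, so this raises:
# TypeError: predict() got an unexpected keyword argument 'future_covariates'
model.predict(n=3, future_covariates=series)
```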

Collaborator:

Your tests do not match the style of the other tests; please look at how they are defined in test_local_forecasting_models.py (especially the parametrize decorator and the helper functions).

Contributor:

Should be fixed now (8c1b573)

Collaborator:

Instead of the loops in test_fit_predict_local_models() and test_fit_predict_local_future_covariates_models(), which make the test fail if one of the models fails without indicating which one, it would be great to leverage pytest.mark.parametrize and replace the list of instantiated models with a list of kwargs (one for local models, one for local models that support future covariates) that is then used to create each model before fit/predict (as done here).

Also, testing models that support future covariates without actually passing future covariates to fit() would be great. A rough sketch of the parametrized structure follows below.
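A sketch of the suggested structure (the model list, fixture name, and horizon are assumptions for illustration):

```python
import pytest

from darts.models import NaiveDrift, NaiveSeasonal

# Hypothetical parametrization: (model class, constructor kwargs).
local_model_configs = [
    (NaiveDrift, {}),
    (NaiveSeasonal, {"K": 12}),
]


@pytest.mark.parametrize("model_cls,kwargs", local_model_configs)
def test_fit_predict_local_models(model_cls, kwargs, multivariate_series):
    # Each model is instantiated inside the test, so a failure report names
    # exactly which configuration broke (`multivariate_series` is assumed to
    # be a fixture providing a multi-component TimeSeries).
    model = MultivariateForecastingModelWrapper(model_cls(**kwargs))
    model.fit(multivariate_series)
    preds = model.predict(n=10)
    assert preds.n_components == multivariate_series.n_components
```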

```python
        model, self.n_pred, combination, future_covariates
    )
    assert isinstance(preds, TimeSeries)
    assert preds.n_components == combination.n_components
```
Collaborator:

It would be nice to check that local models trained manually on each component of a series return the exact same forecasts as the one in the wrapper; see the sketch below.
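Roughly, such a check could look like this (a sketch; the fixture name is an assumption, and it assumes the wrapper preserves component names in its output):

```python
from darts.models import NaiveSeasonal


def test_wrapper_matches_per_component_models(multivariate_series):
    # Forecast through the wrapper...
    wrapper = MultivariateForecastingModelWrapper(NaiveSeasonal(K=12))
    wrapper.fit(multivariate_series)
    wrapped = wrapper.predict(n=10)

    # ...and manually, one model per component; results must be identical.
    for component in multivariate_series.components:
        model = NaiveSeasonal(K=12)
        model.fit(multivariate_series.univariate_component(component))
        manual = model.predict(n=10)
        assert (
            manual.values() == wrapped.univariate_component(component).values()
        ).all()
```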

Contributor:

This is now being tested.

Collaborator:

Here, it would also be great to leverage the pytest.mark.parametrize decorator and make the input series (univariate/multivariate) one of the parameters (in addition to the model, see the comment above), so that if the test fails it is clear which series was involved.

@felixdivo (Contributor)

Awesome that you tackled this @JanFidor!

Do you plan to work on this further? I could, for example, help by adapting/extending the tests. I really want to see this feature completed, since it makes my work quite a bit simpler. Do you want me to create a small PR to your branch JanFidor:feature/multivariate-wrapper for that?

@JanFidor (Contributor, Author)

JanFidor commented Jan 9, 2024

Hi @felixdivo, thanks for the message! Long story short, this university semester had much more work than I anticipated, so I went into survival mode for a bit, hahaha. As soon as the semester ends (~3 weeks) I'm planning to go back to my old PRs and get them merged. If you feel up for writing some tests I'd be very thankful, but I'd still need to survive until the exams end to get the PR merged. (I'm really sorry, but this month is especially terrible when it comes to university workload, so there's no way I'll be able to get back to it any faster.)

@felixdivo (Contributor)

Hey @JanFidor, I feel you; this can definitely happen. I'll open a small PR patching a few things (like the tests, as advised by @madtoinou), and you can just have a look once you're back from survival in creative mode :) Feel free to reach out for help then. Good luck!

@felixdivo (Contributor)

I'd maybe rename the model in the MultivariateForecastingModelWrapper to something like _template, since it is never actually used to make forecasts itself, but only as a blueprint to generate copies from.

JanFidor and others added 4 commits February 5, 2024 21:31:
- Improve testing code for MultivariateForecastingModelWrapper
- …ate-wrapper (merge commit; conflicts: darts/timeseries.py)
- Co-authored-by: madtoinou <32447896+madtoinou@users.noreply.github.com>
- Co-authored-by: madtoinou <32447896+madtoinou@users.noreply.github.com>
@felixdivo (Contributor)

@madtoinou are there unaddressed issues in the PR? Maybe marking some comments above as resolved (if that is the case) would help maintain a better overview.

@madtoinou (Collaborator)

Closed the conversations that were addressed; the comments about pytest.mark.parametrize are still valid.

@felixdivo (Contributor)

@JanFidor can you have a look at them?

Successfully merging this pull request may close these issues:
- Add a multivariate wrapper around univariate models (#1845)

5 participants