A bug when importing pytorch_forecasting #1479
Comments
For me, it worked to downgrade to optuna 3.4 and pytorch 2.0.1.
Same issue with optuna 3.5, pytorch 2.2, and pytorch_forecasting==1.0.0.
Same issue with:
I agree, I have the same issue here with:
Also see #1468.
Expected behavior
I executed the code

from pytorch_forecasting.models.temporal_fusion_transformer.tuning import optimize_hyperparameters

but got the following error:
TypeError Traceback (most recent call last)
Cell In[10], line 18
16 from pytorch_forecasting.data import GroupNormalizer
17 from pytorch_forecasting.metrics import MAE, MAPE,SMAPE, MAPE,PoissonLoss, QuantileLoss
---> 18 from pytorch_forecasting.models.temporal_fusion_transformer.tuning import optimize_hyperparameters
19 from pytorch_forecasting.data.examples import get_stallion_data
20 import datetime
File D:\Anaconda\envs\torch\lib\site-packages\pytorch_forecasting\models\temporal_fusion_transformer\tuning.py:29
25 optuna_logger = logging.getLogger("optuna")
28 # need to inherit from callback for this to work
---> 29 class PyTorchLightningPruningCallbackAdjusted(pl.Callback, PyTorchLightningPruningCallback):
30 pass
33 def optimize_hyperparameters(
34 train_dataloaders: DataLoader,
35 val_dataloaders: DataLoader,
(...)
52 **kwargs,
53 ) -> optuna.Study:
TypeError: Cannot create a consistent method resolution
order (MRO) for bases Callback, PyTorchLightningPruningCallback
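The MRO conflict can be reproduced with plain stand-in classes (the names below are illustrative, not optuna's actual code). The likely cause: in optuna 3.5's integration module, PyTorchLightningPruningCallback apparently already subclasses pl.Callback, so listing pl.Callback first asks Python to order a base class before its own subclass, which no C3 linearization can satisfy.

```python
class Callback:
    """Stand-in for lightning's pl.Callback."""
    pass


class PruningCallback(Callback):
    """Stand-in for optuna's PyTorchLightningPruningCallback,
    assuming it already inherits from pl.Callback (as in optuna >= 3.5)."""
    pass


# Same base order as pytorch_forecasting's tuning.py -- this fails,
# because C3 linearization cannot place Callback both before and
# after its subclass PruningCallback.
try:
    class Adjusted(Callback, PruningCallback):
        pass
except TypeError as exc:
    print(f"TypeError: {exc}")

# Swapping the bases (most-derived class first) yields a consistent MRO:
class AdjustedFixed(PruningCallback, Callback):
    pass

print([c.__name__ for c in AdjustedFixed.__mro__])
# -> ['AdjustedFixed', 'PruningCallback', 'Callback', 'object']
```

This also suggests why the downgrade workaround reportedly helps: if an older optuna's PyTorchLightningPruningCallback does not itself subclass pl.Callback, the dual inheritance in tuning.py is consistent.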
del should also be included.