
Releases: skorch-dev/skorch

Version 1.0.0

27 May 15:25
dd341d3

The 1.0.0 release of skorch is here. We think that skorch is at a very stable point, which is why a 1.0.0 release is appropriate. There are no plans to add any breaking changes or major revisions in the future. Instead, our focus now is to keep skorch up-to-date with the latest versions of PyTorch and scikit-learn, and to fix any bugs that may arise.

Find the full list of changes here: v0.15.0...v1.0.0

Version 0.15.0

04 Sep 10:10
17c7675

This is a smaller release, but it still contains changes which will be interesting to some of you.

We added the possibility to store weights using safetensors. This can have several advantages, listed here. When calling net.save_params and net.load_params, just pass use_safetensors=True to use safetensors instead of pickle.
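A minimal sketch of saving and loading with safetensors (toy module, data, and file name):

```python
import numpy as np
from torch import nn
from skorch import NeuralNetClassifier

X = np.random.randn(100, 20).astype('float32')
y = np.random.randint(0, 2, size=100).astype('int64')

module = nn.Sequential(nn.Linear(20, 2), nn.LogSoftmax(dim=-1))
net = NeuralNetClassifier(module, max_epochs=3)
net.fit(X, y)

# store the weights with safetensors instead of pickle
net.save_params(f_params='model.safetensors', use_safetensors=True)

# load them back into a freshly initialized net
new_net = NeuralNetClassifier(module, max_epochs=3).initialize()
new_net.load_params(f_params='model.safetensors', use_safetensors=True)
```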

Moreover, there is a new argument on NeuralNet: you can now pass use_caching=True or False to enable or disable caching for all callbacks at once. This is useful if you have a lot of scoring callbacks and don't want to toggle caching on each of them individually.
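A small sketch of the new argument (toy module; the scoring callbacks are just examples):

```python
from torch import nn
from skorch import NeuralNetClassifier
from skorch.callbacks import EpochScoring

module = nn.Sequential(nn.Linear(20, 2), nn.LogSoftmax(dim=-1))

# with several scoring callbacks, use_caching=False turns caching off for all
# of them at once instead of setting it on each callback separately
net = NeuralNetClassifier(
    module,
    callbacks=[
        EpochScoring('f1', lower_is_better=False),
        EpochScoring('roc_auc', lower_is_better=False),
    ],
    use_caching=False,
)
```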

Finally, we fixed a few issues related to using skorch with accelerate.

Thanks Zach Mueller (@muellerzr) for his first contribution to skorch.

Find the full list of changes here: v0.14.0...v0.15.0

Version 0.14.0

26 Jun 15:29
4c5cfda

This release offers a new scikit-learn compatible interface for zero-shot and few-shot classification using open-source large language models (jump right into the example notebook).

skorch.llm.ZeroShotClassifier and skorch.llm.FewShotClassifier allow you to perform classification using open-source language models that are compatible with the Hugging Face generation interface. This allows you to do all sorts of interesting things in your pipelines, from simply plugging an LLM into your classification pipeline to get preliminary results quickly, to using these classifiers to generate training data candidates for downstream models. This is a first draft of the interface, so it is not unlikely that it will change a bit in the future; please let us know about any issues you encounter.
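A rough sketch of the intended usage (the model name and labels are just examples, and details may differ given the draft status of the interface):

```python
from skorch.llm import ZeroShotClassifier

X = [
    "A masterpiece, instant classic, 5 stars out of 5",
    "I was bored. Would not watch again.",
]
clf = ZeroShotClassifier('bigscience/bloomz-1b1')
clf.fit(X=None, y=['positive', 'negative'])  # only the candidate labels are needed
predictions = clf.predict(X)
probas = clf.predict_proba(X)
```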

Other items of this release are:

  • the drop of Python 3.7 support - this version of Python has reached its end of life and will no longer be supported
  • the NeptuneLogger now logs the skorch version thanks to @AleksanderWWW
  • NeuralNetRegressor can now be fitted with a 1-dimensional y, which is necessary in some specific circumstances (e.g. in conjunction with sklearn's BaggingRegressor, see #972); for this to work correctly, the output of the PyTorch module should also be 1-dimensional (see the sketch below); the existing default, i.e. having y and y_pred be 2-dimensional, remains the recommended way of using NeuralNetRegressor
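As referenced in the last item, a minimal sketch with a 1-dimensional y and a module whose output is squeezed to match:

```python
import numpy as np
from torch import nn
from skorch import NeuralNetRegressor

class FlatModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.lin = nn.Linear(20, 1)

    def forward(self, X):
        # squeeze the last dimension so y_pred is 1-d, matching the 1-d y
        return self.lin(X).squeeze(-1)

X = np.random.randn(100, 20).astype('float32')
y = np.random.randn(100).astype('float32')  # note: 1-d target

net = NeuralNetRegressor(FlatModule, max_epochs=3)
net.fit(X, y)
```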

Full Changelog: v0.13.0...v0.14.0

Version 0.13.0

17 May 10:20
cc210fe

The new skorch release is here and it has some changes that will be exciting for some users.

  • First of all, you may have heard of the PyTorch 2.0 release, which includes the option to compile the PyTorch module for better runtime performance. This skorch release allows you to pass compile=True when initializing the net to enable compilation (see the sketch after this list).
  • Support for training on multiple GPUs with the help of the accelerate package has been improved by fixing some bugs and providing a dedicated history class. Our documentation contains more information on what to consider when training on multiple GPUs.
  • If you have ever been frustrated with your neural net not training properly, you know how hard it can be to discover the underlying issue. Using the new SkorchDoctor class will simplify the diagnosis of underlying issues. Take a look at the accompanying notebook.
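As referenced above, a minimal sketch of enabling compilation (requires PyTorch >= 2.0; the dunder argument is just an example):

```python
from torch import nn
from skorch import NeuralNetClassifier

module = nn.Sequential(nn.Linear(20, 2), nn.LogSoftmax(dim=-1))

# the module is compiled via torch.compile during net initialization
net = NeuralNetClassifier(
    module,
    compile=True,
    compile__dynamic=True,  # extra arguments are passed through to torch.compile
)
```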

Apart from that, a few bugs have been fixed and the included notebooks have been updated to properly install requirements on Google Colab.

We are grateful to our external contributors for making this release possible.

Find the list of all changes since v0.12.1 below:

Added

  • Add support for compiled PyTorch modules using the torch.compile function, introduced in the PyTorch 2.0 release, which can greatly improve performance on new GPU architectures; to use it, initialize your net with the compile=True argument; further compilation arguments can be specified using the dunder notation, e.g. compile__dynamic=True
  • Add a class DistributedHistory which should be used when training in a multi GPU setting (#955)
  • SkorchDoctor: a helper class that assists in understanding and debugging the neural net training; see this notebook and the sketch after this list (#912)
  • When using AccelerateMixin, it is now possible to prevent unwrapping of the modules by setting unwrap_after_train=True (#963)
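As referenced above, a minimal sketch of how SkorchDoctor is meant to be used (the import location is an assumption; net, X, and y stand for a skorch net and data defined elsewhere):

```python
from skorch.helper import SkorchDoctor  # assumed import location

doctor = SkorchDoctor(net)
doctor.fit(X[:100], y[:100])  # fit on a small subset; activations, gradients,
                              # and parameter updates are recorded along the way
# the recorded values can then be inspected to diagnose training problems
```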

Fixed

  • Fixed install command to work with recent changes in Google Colab (#928)
  • Fixed a couple of bugs related to using non-default modules and criteria (#927)
  • Fixed a bug when using AccelerateMixin in a multi-GPU setup (#947)
  • _get_param_names returns a list instead of a generator so that subsequent error messages return useful information instead of a generator repr string (#925)
  • Fixed a bug that caused modules to not be sufficiently unwrapped at the end of training when using AccelerateMixin, which could prevent them from being pickleable (#963)

Version 0.12.1

18 Nov 12:42

This is a small release which consists mostly of a couple of bug fixes. The standout feature here is the update of the NeptuneLogger, which makes it work with the latest Neptune client versions and adds many useful features; check it out. Big thanks to @twolodzko and colleagues for this update.

Here is the list of all changes:

  • Add Hugging Face integration tests #904
  • The entry for the HF badge was missing #905
  • Fix false warning if iterator_valid__shuffle=False #908
  • Update the Neptune integration by @twolodzko #906
  • DOC Update the documentation in several places #909
  • Don't fail when gpytorch import fails #913

Version 0.12.0

07 Oct 09:48
1596c51

We're pleased to announce a new skorch release, bringing new features that might interest you.

The main changes relate to better integration with the Hugging Face ecosystem; see the list of changes below for details.

But this is not all. We have added the possibility to load the best model parameters at the end of training when using the EarlyStopping callback. We also added the possibility to remove unneeded attributes from the net after training, when it is intended to be used only for prediction, by calling the trim_for_prediction method (see the sketch below). Moreover, we now show how to use skorch with PyTorch Geometric in this notebook.
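A minimal sketch of both conveniences (toy data and module):

```python
import numpy as np
from torch import nn
from skorch import NeuralNetClassifier
from skorch.callbacks import EarlyStopping

X = np.random.randn(100, 20).astype('float32')
y = np.random.randint(0, 2, size=100).astype('int64')

net = NeuralNetClassifier(
    nn.Sequential(nn.Linear(20, 2), nn.LogSoftmax(dim=-1)),
    callbacks=[EarlyStopping(patience=5, load_best=True)],
    max_epochs=20,
)
net.fit(X, y)

# strip attributes that are only needed for training; afterwards the net
# can only be used for prediction, but it takes up less space when stored
net.trim_for_prediction()
```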

As always, this release was made possible by outside contributors; many thanks to all of them.

Find below the list of all changes:

Added

  • Added load_best attribute to EarlyStopping callback to automatically load module weights of the best result at the end of training
  • Added a method, trim_for_prediction, on the net classes, which trims the net from everything not required for using it for prediction; call this after fitting to reduce the size of the net
  • Added experimental support for Hugging Face accelerate; use the provided mixin class to add advanced training capabilities provided by the accelerate library to skorch
  • Added an integration for Hugging Face tokenizers; use skorch.hf.HuggingfaceTokenizer to train a Hugging Face tokenizer on your custom data, or use skorch.hf.HuggingfacePretrainedTokenizer to load a pre-trained Hugging Face tokenizer (see the sketch after this list)
  • Added support for creating model checkpoints on Hugging Face Hub using HfHubStorage
  • Added a notebook that shows how to use skorch with PyTorch Geometric (#863)
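As referenced in the list above, a small sketch of the pre-trained tokenizer variant (the model name is an example):

```python
from skorch.hf import HuggingfacePretrainedTokenizer

X = ["a first sentence", "another sentence to tokenize"]

# wrap a pre-trained tokenizer in an sklearn-compatible transformer
tokenizer = HuggingfacePretrainedTokenizer('bert-base-uncased')
tokenizer.fit(X)  # only loads the pre-trained vocabulary; no training happens
Xt = tokenizer.transform(X)  # encoded output, e.g. input_ids and attention_mask
```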

Changed

  • The minimum required scikit-learn version has been bumped to 0.22.0
  • Initialize data loaders for training and validation dataset once per fit call instead of once per epoch (migration guide)
  • It is now possible to call np.asarray with SliceDatasets (#858)

Fixed

  • Fix a bug in SliceDataset that prevented it from being used with to_numpy (#858)
  • Fix a bug that occurred when loading a net that has device set to None (#876)
  • Fix a bug that in some cases could prevent loading a net that was trained with CUDA on a machine without CUDA
  • Enable skorch to work on M1/M2 Apple MacBooks (#884)

Version 0.11.0

31 Oct 15:54
baf0580

We are happy to announce the new skorch 0.11 release:

Two basic but very useful features have been added to our collection of callbacks. First, by setting load_best=True on the Checkpoint callback, the snapshot of the network with the best score will be loaded automatically when training ends. Second, we added a callback InputShapeSetter that automatically adjusts your input layer to have the size of your input data (useful e.g. when that size is not known beforehand). Both are shown in the sketch below.
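A combined sketch (assuming the module exposes an input_dim argument, which is what InputShapeSetter sets by default):

```python
from torch import nn
from skorch import NeuralNetClassifier
from skorch.callbacks import Checkpoint, InputShapeSetter

class ClassifierModule(nn.Module):
    def __init__(self, input_dim=1):
        super().__init__()
        self.lin = nn.Linear(input_dim, 2)

    def forward(self, X):
        return nn.functional.log_softmax(self.lin(X), dim=-1)

net = NeuralNetClassifier(
    ClassifierModule,
    callbacks=[
        Checkpoint(load_best=True),  # reload the best snapshot when training ends
        InputShapeSetter(),          # sets module__input_dim from the data at fit time
    ],
)
```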

When it comes to integrations, the new MlflowLogger callback allows you to log to MLflow automatically. Thanks to a contributor, some regressions in net.history have been fixed, and it even runs faster now.

On top of that, skorch now offers a new module, skorch.probabilistic. It contains new classes to work with Gaussian Processes using the familiar skorch API. This is made possible by the fantastic GPyTorch library, which skorch uses under the hood. So if you want to get started with Gaussian Processes in skorch, check out the documentation and this notebook, or see the sketch below. Since we're still learning, it's possible that we will change the API in the future, so please be aware of that.
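A minimal sketch along the lines of the GP documentation (the exact class and argument names should be checked against the docs):

```python
import gpytorch
import numpy as np
from skorch.probabilistic import ExactGPRegressor

class RbfModule(gpytorch.models.ExactGP):
    """A simple GP module with a constant mean and an RBF kernel."""
    def __init__(self, likelihood):
        super().__init__(train_inputs=None, train_targets=None, likelihood=likelihood)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.RBFKernel()

    def forward(self, x):
        mean_x = self.mean_module(x)
        covar_x = self.covar_module(x)
        return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)

X = np.random.randn(50, 1).astype('float32')
y = np.sin(X).ravel()

gpr = ExactGPRegressor(RbfModule)
gpr.fit(X, y)
y_pred = gpr.predict(X)
```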

Moreover, we introduced some changes to make skorch more customizable. First of all, we changed the signature of some methods so that they no longer assume the dataset to always return exactly 2 values. This way, it's easier to work with custom datasets that return e.g. 3 values (see the sketch below). Normal users should not notice any difference, but if you often create custom nets, take a look at the migration guide.
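For example, a custom net whose dataset yields three values might now look roughly like this (the weighting logic and criterion__reduction='none' are assumptions made for the sake of the example):

```python
from skorch import NeuralNet

class WeightedNet(NeuralNet):
    # hypothetical override for a dataset that yields (X, y, sample_weight);
    # since this change, the step methods receive the whole batch, not (X, y)
    def train_step_single(self, batch, **fit_params):
        self.module_.train()
        X, y, sample_weight = batch
        y_pred = self.infer(X, **fit_params)
        loss = self.get_loss(y_pred, y, X=X, training=True)
        loss = (loss * sample_weight).mean()  # assumes criterion__reduction='none'
        loss.backward()
        return {'loss': loss, 'y_pred': y_pred}
```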

And finally, we made a change to how custom modules, criteria, and optimizers are handled. They are now "first class citizens" in skorch land, which means: if you add a second module to your custom net, it is treated exactly the same as the normal module. E.g., skorch takes care of moving it to CUDA if needed and of switching it to train or eval mode. This way, customizing your network architectures with skorch is easier than ever (see the sketch below). Check the docs for more details.
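A rough, hypothetical sketch of the pattern (see the docs for the authoritative way to register custom modules):

```python
from torch import nn
from skorch import NeuralNet

class AutoencoderNet(NeuralNet):
    # hypothetical second module; with this release, skorch treats modules
    # registered during initialization as first class: it moves them to the
    # right device and switches them between train and eval mode
    def initialize_module(self):
        super().initialize_module()
        self.decoder_ = nn.Linear(10, 20)  # trailing underscore: initialized component
        return self
```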

Since these are some big changes, it's possible that you encounter issues. If that's the case, please check our issue page or create a new one.

As always, this release was made possible by outside contributors. Many thanks to:

  • Autumnii
  • Cebtenzzre
  • Charles Cabergs
  • Immanuel Bayer
  • Jake Gardner
  • Matthias Pfenninger
  • Prabhat Kumar Sahu

Find below the list of all changes:

Added

  • Added load_best attribute to Checkpoint callback to automatically load state of the best result at the end of training
  • Added a get_all_learnable_params method to retrieve the named parameters of all PyTorch modules defined on the net, including of criteria if applicable
  • Added MlflowLogger callback for logging to Mlflow (#769)
  • Added InputShapeSetter callback for automatically setting the input dimension of the PyTorch module
  • Added a new module to support Gaussian Processes through GPyTorch. To learn more about it, read the GP documentation or take a look at the GP notebook. This feature is experimental, i.e. the API could be changed in the future in a backwards incompatible way (#782)

Changed

  • Changed the signature of validation_step, train_step_single, train_step, evaluation_step, on_batch_begin, and on_batch_end such that instead of receiving X and y, they receive the whole batch; this makes it easier to deal with datasets that don't strictly return an (X, y) tuple, which is true for quite a few PyTorch datasets; please refer to the migration guide if you encounter problems (#699)
  • Arguments to NeuralNet are now checked during .initialize() instead of during __init__, to avoid raising false positives for as yet unknown module or optimizer attributes
  • Modules, criteria, and optimizers that are added to a net by the user are now first class: skorch takes care of setting train/eval mode, moving to the indicated device, and updating all learnable parameters during training (check the docs for more details, #751)
  • CVSplit is renamed to ValidSplit to avoid confusion (#752)

Fixed

  • Fixed a few bugs in the net.history implementation (#776)
  • Fixed a bug in TrainEndCheckpoint that prevented it from being unpickled (#773)

Version 0.10.0

23 Mar 15:34

This one is a smaller release, but we have some bigger additions waiting for the next one.

First, we added support for Sacred to help you better organize your experiments. The CLI helper now also works with non-skorch estimators, as long as they are sklearn compatible. Some issues related to learning rate scheduling have also been fixed.

Another big topic this time was performance. First of all, we added a performance section to the docs. Furthermore, we made it possible to switch off callbacks completely if performance is absolutely critical (see the sketch below). Finally, we improved the speed of some internals (history logging). In sum, this means that skorch should be much faster for small network architectures.
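A minimal sketch of switching callbacks off (assuming the 'disable' flag also covers the default callbacks):

```python
from torch import nn
from skorch import NeuralNetClassifier

module = nn.Sequential(nn.Linear(20, 2), nn.LogSoftmax(dim=-1))

# disables all callbacks, including default ones like epoch scoring and
# printing, leaving just the bare fit loop
net = NeuralNetClassifier(module, callbacks='disable')
```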

We are grateful to the contributors, new and recurring:

  • Fariz Rahman
  • Han Bao
  • Scott Sievert
  • supetronix
  • Timo Kaufmann

Version 0.9.0

30 Aug 10:46
92ae54b

This release of skorch contains a few minor improvements and some nice additions. As always, we fixed a few bugs and improved the documentation. Our learning rate scheduler now optionally logs learning rate changes to the history; moreover, it now allows the user to choose whether an update step should be made after each batch or after each epoch (see the sketch below).
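A quick sketch of the new scheduler options:

```python
from torch.optim.lr_scheduler import StepLR
from skorch.callbacks import LRScheduler

lr_scheduler = LRScheduler(
    policy=StepLR, step_size=10, gamma=0.5,
    step_every='epoch',     # take a scheduler step after each epoch (or 'batch')
    event_name='event_lr',  # record LR changes in net.history under this key
)
# pass it to the net via callbacks=[lr_scheduler]
```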

If you have always longed for a metric that just uses whatever is defined by your criterion, look no further than loss_scoring. Also, skorch now allows you to easily change the kind of nonlinearity applied to the module's output when predict and predict_proba are called, by passing the predict_nonlinearity argument (both are shown in the sketch below).
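A small sketch of both features together (data and module are toy examples):

```python
import numpy as np
import torch
from torch import nn
from skorch import NeuralNetBinaryClassifier
from skorch.scoring import loss_scoring

class LogitModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.lin = nn.Linear(20, 1)

    def forward(self, X):
        return self.lin(X).squeeze(-1)  # raw logits, shape (n,)

X = np.random.randn(100, 20).astype('float32')
y = np.random.randint(0, 2, size=100).astype('float32')

net = NeuralNetBinaryClassifier(
    LogitModule,
    predict_nonlinearity=torch.sigmoid,  # applied in predict and predict_proba
)
net.fit(X, y)
print(loss_scoring(net, X, y))  # scores the net using its own criterion
```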

Besides these changes, we improved the customization potential of skorch. First of all, the criterion is now set to train or eval mode, depending on the phase -- this is useful if the criterion should act differently during training and validation. Next, we made it easier to add custom modules, optimizers, and criteria to your neural net; this should facilitate implementing architectures like GANs. Consult the docs for more on this. Conveniently, net.save_params can now persist arbitrary attributes, including those custom modules (see the sketch below).
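For instance, a learnable criterion can now be persisted alongside the module weights; a small sketch (file names are examples; net and new_net stand for skorch nets defined elsewhere):

```python
# persist the module weights and a learnable criterion side by side
net.save_params(f_params='model.pt', f_criterion='criterion.pt')

# restore them later into an initialized net; custom modules are assumed to
# follow the same f_<name> keyword pattern (check the docs for details)
new_net.initialize()
new_net.load_params(f_params='model.pt', f_criterion='criterion.pt')
```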
As always, these improvements wouldn't have been possible without the community. Please keep asking questions, raising issues, and proposing new features. We are especially grateful to those community members, old and new, who contributed via PRs:

  • Aaron Berk
  • guybuk
  • kqf
  • Michał Słapek
  • Scott Sievert
  • Yann Dubois
  • Zhao Meng

Here is the full list of all changes:

Added

  • Added the event_name argument for LRScheduler for optional recording of LR changes inside net.history. NOTE: supported only in PyTorch >= 1.4
  • Make it easier to add custom modules or optimizers to a neural net class by automatically registering them where necessary and by making them available to set_params
  • Added the step_every argument for LRScheduler to set whether the scheduler step should be taken on every epoch or on every batch.
  • Added the scoring module with loss_scoring function, which computes the net's loss (using get_loss) on provided input data.
  • Added a parameter predict_nonlinearity to NeuralNet which allows users to control the nonlinearity to be applied to the module output when calling predict and predict_proba (#637, #661)
  • Added the possibility to save the criterion with save_params and with checkpoint callbacks
  • Added the possibility to save custom modules with save_params and with checkpoint callbacks

Changed

  • Removed support for schedulers with a batch_step() method in LRScheduler.
  • Raise FutureWarning in CVSplit when random_state is not used. This will raise an exception in the future (#620)
  • The behavior of the method net.get_params changed to make it more consistent with sklearn: it will no longer return "learned" attributes like module_; therefore, functions like sklearn.base.clone, when called with a fitted net, will no longer return a fitted net but instead an uninitialized net; if you want a copy of a fitted net, use copy.deepcopy instead (see the sketch after this list). net.get_params is used under the hood by many sklearn functions and classes, such as GridSearchCV, whose behavior may thus be affected by the change. (#521, #527)
  • Raise FutureWarning when using CyclicLR scheduler, because the default behavior has changed from taking a step every batch to taking a step every epoch. (#626)
  • Set train/validation on criterion if it's a PyTorch module (#621)
  • Don't pass y=None to NeuralNet.train_split to enable the direct use of split functions without positional y in their signatures. This is useful when working with unsupervised data (#605).
  • to_numpy is now able to unpack dicts and lists/tuples (#657, #658)
  • When using CrossEntropyLoss, softmax is now automatically applied to the output when calling predict or predict_proba
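As referenced in the list above, a quick sketch of the clone vs. deepcopy difference (net, X, and y stand for a skorch net and your data):

```python
from copy import deepcopy
from sklearn.base import clone

net.fit(X, y)
net_clone = clone(net)    # an uninitialized net with the same hyperparameters
net_copy = deepcopy(net)  # a faithful copy, including the learned weights
```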

Fixed

  • Fixed a bug where CyclicLR scheduler would update during both training and validation rather than just during training.
  • Fixed a bug introduced by moving the optimizer.zero_grad() call outside of the train step function, making it incompatible with LBFGS and other optimizers that call the train step several times per batch (#636)
  • Fixed pickling of the ProgressBar callback (#656)

Version 0.8.0

12 Apr 16:37
7a84568

This release contains improvements on the callback side of things. Thanks to new contributors, skorch now integrates with Neptune through NeptuneLogger and with Weights & Biases through WandbLogger. We also added PassthroughScoring, which automatically creates epoch-level scores based on computed batch-level scores (see the sketch below).
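A short sketch of how these callbacks can be combined (MyModule is a hypothetical PyTorch module, and the project name is just an example):

```python
import wandb
from skorch import NeuralNetClassifier
from skorch.callbacks import WandbLogger, PassthroughScoring

wandb_run = wandb.init(project='skorch-demo')  # the project name is an example

net = NeuralNetClassifier(
    MyModule,  # hypothetical PyTorch module
    callbacks=[
        WandbLogger(wandb_run),
        # average an existing batch-level score and write it to the epoch level
        PassthroughScoring(name='train_loss', on_train=True),
    ],
)
```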

If you want skorch not to meddle with moving modules and data to certain devices, you can now pass device=None and thus keep full control. And if you would like to pass pandas DataFrames as input data but were unhappy with how skorch handles them, take a look at DataFrameTransformer (see the sketch below). Moreover, we cleaned up duplicate code in the fit loop, which should make it easier for users to make their own changes to it. Finally, we improved skorch compatibility with sklearn 0.22 and added minor performance improvements.
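A rough sketch of the intended pipeline usage (MyModule is a hypothetical module that accepts the keys produced by the transformer; df and y stand for your DataFrame and targets):

```python
from sklearn.pipeline import Pipeline
from skorch import NeuralNetClassifier
from skorch.helper import DataFrameTransformer

pipe = Pipeline([
    # turns a DataFrame into float arrays, plus int arrays for categorical columns
    ('transform', DataFrameTransformer()),
    ('net', NeuralNetClassifier(MyModule)),  # hypothetical module accepting those keys
])
pipe.fit(df, y)
```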

As always, we're very thankful for everyone who opened issues and asked questions on diverse channels; all forms of feedback and questions are welcome. We're also very grateful for all contributors, some old but many new:

  • Alexander Kolb
  • Benjamin Ajayi-Obe
  • Boris Dayma
  • Jakub Czakon
  • Riccardo Di Maio
  • Thomas Fan
  • Yann Dubois

Here is a list of all the changes and their corresponding ticket numbers in detail:

Added

  • Added NeptuneLogger callback for logging experiment metadata to neptune.ai (#586)
  • Add DataFrameTransformer, an sklearn compatible transformer that helps working with pandas DataFrames by transforming the DataFrame into a representation that works well with neural networks (#507)
  • Added WandbLogger callback for logging to Weights & Biases (#607)
  • Added None option to device which leaves the device(s) unmodified (#600)
  • Add PassthroughScoring, a scoring callback that just calculates the average score of a metric determined at batch level and then writes it to the epoch level (#595)

Changed

  • When using caching in scoring callbacks, no longer uselessly iterate over the data; this can save time if iteration is slow (#552, #557)
  • Cleaned up duplicate code in the fit_loop (#564)

Fixed

  • Make skorch compatible with sklearn 0.22 (#571, #573, #575)
  • Fixed a bug that could occur when a new "settable" (via set_params) attribute was added to NeuralNet whose name starts the same as an existing attribute's name (#590)