Contains benchmarking and interpretability experiments on the Adult dataset using several libraries

The initial experiments were part of an assignment from TCS ILP Innovations' Lab. Later, as my appetite for machine learning grew, I decided to revisit the dataset and try out newer libraries.

The repository includes benchmarking and interpretability experiments on the Adult dataset using libraries such as fastai, h2o, and interpret. Along with these, I show how the interpret library can be used to construct explanations for sklearn models. Note that keras models can be wrapped as sklearn estimators, which lets interpret work on them as well.
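For a rough idea of that workflow, here is a minimal sketch using interpret's LimeTabular blackbox explainer on a scikit-learn model. The variable names and hyperparameters are placeholders rather than the exact code from the notebooks, and interpret's constructor arguments have shifted slightly across releases.

```python
# Minimal sketch, assuming a preprocessed Adult split into
# X_train / X_test / y_train / y_test. Not the exact notebook code.
from sklearn.ensemble import RandomForestClassifier
from interpret.blackbox import LimeTabular
from interpret import show

# Any fitted scikit-learn estimator can play the "blackbox" role here.
clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(X_train, y_train)

# Older interpret releases take predict_fn=...; newer ones accept model=clf instead.
lime = LimeTabular(predict_fn=clf.predict_proba, data=X_train)

# Explain a handful of test rows and open the interactive dashboard.
lime_local = lime.explain_local(X_test[:5], y_test[:5], name="LIME on RandomForest")
show(lime_local)
```

The same pattern applies to a keras model once it is wrapped as a scikit-learn estimator (for example with KerasClassifier).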

I show how easy it is to interpret a blackbox machine learning model with interpret; the library really lives up to its name. I also show how to use a Decision Tree Surrogate to explain models in h2o.
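To illustrate the surrogate idea, here is a minimal sketch in h2o: train a complex model, then fit a single shallow tree on its predictions and read that tree as an approximate global explanation. Column names and hyperparameters are placeholders, not the settings used in the notebooks.

```python
# Minimal sketch of a decision-tree surrogate in h2o, assuming an
# H2OFrame named `train` with a binary target column "income".
import h2o
from h2o.estimators import H2OGradientBoostingEstimator

h2o.init()

train["income"] = train["income"].asfactor()
features = [c for c in train.columns if c != "income"]

# 1. Train the complex ("blackbox") model.
blackbox = H2OGradientBoostingEstimator(ntrees=200, max_depth=6, seed=42)
blackbox.train(x=features, y="income", training_frame=train)

# 2. Attach the blackbox model's predicted labels to the frame.
train["blackbox_pred"] = blackbox.predict(train)["predict"]

# 3. Fit a single shallow tree on those predictions: the surrogate.
surrogate = H2OGradientBoostingEstimator(ntrees=1, max_depth=3,
                                         sample_rate=1.0, col_sample_rate=1.0,
                                         seed=42)
surrogate.train(x=features, y="blackbox_pred", training_frame=train)

# Inspect the surrogate's splits for a rough global explanation.
surrogate.show()
```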

To do: annotate the notebooks in plain English and add short explanations of the various interpretability methods used.
