Wine quality multi-class prediction neural net model implemented in PyTorch, with model exploration and explanation using SHAP.
Updated Mar 10, 2020 - Jupyter Notebook
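The entry above pairs a PyTorch classifier with SHAP. As a minimal, library-free sketch of the Shapley-value idea that SHAP builds on (the model, features, and numbers here are hypothetical, not taken from the repo):

```python
import itertools
import math
import numpy as np

def shapley_values(f, x, baseline):
    """Exact Shapley values by subset enumeration.

    Exponential in the number of features, so only viable for the
    handful of columns in a wine-quality-style table; SHAP's
    explainers approximate this efficiently for real models.
    """
    n = len(x)
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for r in range(len(others) + 1):
            for S in itertools.combinations(others, r):
                # Shapley weight |S|! (n - |S| - 1)! / n!
                w = (math.factorial(len(S))
                     * math.factorial(n - len(S) - 1)
                     / math.factorial(n))
                # Marginal contribution of feature i given coalition S:
                # features in S (and i) take their real values,
                # everything else stays at the baseline.
                with_i = baseline.copy()
                with_i[list(S) + [i]] = x[list(S) + [i]]
                without_i = baseline.copy()
                without_i[list(S)] = x[list(S)]
                phi[i] += w * (f(with_i) - f(without_i))
    return phi

# Toy "model": a linear score over three made-up wine features.
w = np.array([0.4, -0.2, 0.1])
f = lambda v: float(w @ v)

x = np.array([7.0, 0.5, 3.2])         # instance to explain
baseline = np.array([6.0, 0.3, 3.0])  # reference point, e.g. dataset mean

phi = shapley_values(f, x, baseline)
# For a linear model, phi[i] equals w[i] * (x[i] - baseline[i]),
# and the attributions sum to f(x) - f(baseline).
```

For a neural net like the one in the repo, `f` would wrap the trained model's forward pass, and in practice one would use the `shap` package rather than this brute-force enumeration.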
This repo holds my attempt to explain fake news detection models.
Kaggle Machine Learning Courses Exercises
Slot Attention-based Classifier for Explainable Image Recognition
Comparison of sentiment analysis conducted with a lexicon and rule-based dictionary and state-of-the-art pre-trained language models
Fundamentals of Interpretable Data Science
GitHub repository for our work "Interpretable Machine Learning for Precision Aging"
Classifying Travel Mode choice in the Netherlands using KNN, XGBoost, RF and TabNet
Code, model and data for our paper: K. Tsigos, E. Apostolidis, S. Baxevanakis, S. Papadopoulos, V. Mezaris, "Towards Quantitative Evaluation of Explainable AI Methods for Deepfake Detection", Proc. ACM Int. Workshop on Multimedia AI against Disinformation (MAD’24) at the ACM Int. Conf. on Multimedia Retrieval (ICMR’24), Thailand, June 2024.
This repository includes a machine learning modeling study on estimating customers' hotel booking cancellations and the reasons behind them.
This repository contains code, information and datasets for the project on making interpretable models titled "Model Agnostic Methods for Interpretable Machine Learning". The abstract can be accessed at https://docs.google.com/document/d/1k2-beHD4YQxXpH8ExUM2Gd-yE5VqdluhiCsUIO3czRM/edit?usp=sharing
Machine Supported Labeling of Scientific Publications
Experiments from the bachelor's thesis "Quantitative Evaluation of the Expected Antagonism of Explainability and Privacy". Two explainers are tested against privacy attacks.
Final Year Project KCL
Improve Zorro explanations for graph neural networks.
Through exploratory data analysis, predictive analytics and explainable AI, this project aims to identify the reasons customers churn, giving the company useful insight for minimizing customer churn.
EUCA is a practical prototyping tool to support the design and evaluation of explainable AI for non-technical end-users
In this project we trained personalized transformer models for news recommendation using adapters (similar to (IA)^3). With layer-wise relevance propagation, we try to explain each recommendation to the user. Through a web interface displaying word clouds, users can see which "filter bubble" they have been assigned to, allowing them to reflect on their behavior.
Explainable AI for Image Classification