Open-source framework for uncertainty and deep learning models in PyTorch 🌱
Privacy-Preserving Machine Learning (PPML) Tutorial
Birhanu Eshete is an Associate Professor of Computer Science at the University of Michigan, Dearborn. His main research focus is in trustworthy machine learning with emphasis on security, safety, privacy, interpretability, fairness, and the dynamics thereof. He also studies online cybercrime and advanced and persistent threats (APTs).
The open-sourced Python toolbox for backdoor attacks and defenses.
In the dynamic landscape of medical artificial intelligence, this study explores the vulnerabilities of the Pathology Language-Image Pretraining (PLIP) model, a Vision Language Foundation model, under targeted attacks like PGD adversarial attack.
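For readers unfamiliar with the PGD attack mentioned above, here is a minimal illustrative sketch in PyTorch. This is not the study's code; the function name, step sizes, and budget are assumptions chosen for clarity.

```python
import torch

def pgd_attack(model, x, y, eps=0.03, alpha=0.01, steps=10):
    """Projected Gradient Descent (illustrative sketch): iteratively
    perturb x within an L-infinity ball of radius eps to maximize the
    model's classification loss."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = torch.nn.functional.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # take a signed-gradient ascent step, then project back
        # into the eps-ball around the original input
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + torch.clamp(x_adv - x, -eps, eps)
        x_adv = torch.clamp(x_adv, 0.0, 1.0)  # keep valid pixel range
    return x_adv.detach()
```

The projection step is what distinguishes PGD from a single-step attack such as FGSM: the perturbation is clipped back into the allowed budget after every gradient step.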
Welcome to my Machine Learning repository, where you can find learning materials both from my studies and from various online courses.
[ICML 2024] TrustLLM: Trustworthiness in Large Language Models
Trustworthy AI/ML course by Professor Birhanu Eshete, University of Michigan, Dearborn.
Code for the paper "Approximating full conformal prediction at scale via influence functions"
Neural Network Verification Software Tool
Repository for the NeurIPS 2023 paper "Beyond Confidence: Reliable Models Should Also Consider Atypicality"
TRIAGE: Characterizing and auditing training data for improved regression (NeurIPS 2023)
My personal website.
DSPLab@UMich-Dearborn Website
This repo contains the code, figures, and datasets for the paper "U-Trustworthy Models: Reliability, Competence, and Confidence in Decision-Making."
Official implementation of NeurIPS 2023 paper "Trade-off Between Efficiency and Consistency for Removal-based Explanations" (https://arxiv.org/abs/2210.17426)
KDD 2023 tutorial "Trustworthy Transfer Learning: Transferability and Trustworthiness"
MERLIN is a global, model-agnostic, contrastive explainer for any tabular or text classifier. It provides contrastive explanations of how the behaviour of two machine learning models differs.
PyTorch package to train and audit ML models for Individual Fairness
Framework for Adversarial Malware Evaluation.