Pytorch implementation of 'Explaining text classifiers with counterfactual representations'
An XGBoost model in Python that classifies whether a customer will cancel their hotel booking. Counterfactuals guided by prototypes, from the Alibi package, are then used to explore the minimum changes needed to flip a prediction from canceled to not canceled and vice versa.
[Autumn 2022] Specialization project leading up to main thesis in MSc Applied Physics and Mathematics at NTNU.
Experiments for the bachelor's thesis "Quantitative Evaluation of the Expected Antagonism of Explainability and Privacy". Two explainers are tested against privacy attacks.
Counterfactual Explanations, a project for CITS4404 - Artificial Intelligence and Adaptive Systems.
Survival-pattern-based counterfactual explanations for survival models
Multiplayer game and chat for collecting data on human counterfactual explanations in a collaborative learning task
Code for Master's thesis in Applied Physics and Mathematics at NTNU.
This is the official repository of the paper "RoCourseNet: Distributionally Robust Training of a Prediction Aware Recourse Model".
Repository for "Endogenous Macrodynamics in Algorithmic Recourse" (Altmeyer et al., 2023)
A pipeline for generating semi-factual and counterfactual explanations for computer vision tasks.
CELS: Counterfactual Explanation for Time Series Data via Learned Saliency Maps (IEEE Big Data 2023)
Motif-guided time series counterfactual explanations (ICPR 2022)
Counterfactual causal analysis
Code for the paper "Consequence-aware Sequential Counterfactual Generation" (https://link.springer.com/chapter/10.1007/978-3-030-86520-7_42), ECML PKDD 2021. Repository maintained by Philip Naumann.
A package for computing and displaying benchmarking results of algorithmic recourse and counterfactual explanation algorithms
Recourse Explanation Library in JAX
This repository is dedicated to PhD Research (Human-centered XAI)
Parsing a resume, predicting whether it will be screened for a given job description, and generating counterfactual suggestions.
Counterfactual explanations for identifying the features most relevant to the shape of response curves generated by neural-network black boxes
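The common theme across these projects is that a counterfactual explanation perturbs an input minimally until a classifier's prediction flips. A minimal sketch of that idea, using a hand-written logistic "booking cancellation" model with illustrative weights and feature names (none of this is taken from any repository above):

```python
import math

# Illustrative logistic model: features are (lead_time_days,
# previous_cancellations); weights and bias are hand-picked, not fitted.
WEIGHTS = [0.02, 1.5]
BIAS = -3.0

def predict_proba(x):
    z = sum(w * xi for w, xi in zip(WEIGHTS, x)) + BIAS
    return 1.0 / (1.0 + math.exp(-z))

def predict(x):
    # Class 1 = "will cancel", class 0 = "will not cancel".
    return int(predict_proba(x) >= 0.5)

def counterfactual(x, step=1.0, max_iters=1000):
    """Greedy search: nudge one feature at a time until the predicted
    class flips, keeping the nudge that moves the probability furthest
    toward the target class."""
    target = 1 - predict(x)
    cf = list(x)
    for _ in range(max_iters):
        if predict(cf) == target:
            return cf
        best, best_gap = None, None
        for i in range(len(cf)):
            for delta in (step, -step):
                cand = list(cf)
                cand[i] += delta
                p = predict_proba(cand)
                gap = abs(float(target) - p)
                if best_gap is None or gap < best_gap:
                    best, best_gap = cand, gap
        cf = best
    return None  # no counterfactual found within the budget

x = [120.0, 1.0]        # predicted to cancel
cf = counterfactual(x)  # a nearby input predicted not to cancel
```

Real libraries such as Alibi replace this greedy loop with optimization objectives (e.g. prototype guidance or distributional robustness, as in several repositories above) that also keep the counterfactual sparse and plausible.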