## PyCon 2021 Talk Proposal

**Title:** Assessing and Mitigating Unfairness in AI Systems

**Who and Why?**

This talk is intended primarily for data scientists and machine learning researchers who want to incorporate unfairness-mitigation best practices and algorithms into their workflows. We expect the audience to have a basic understanding of supervised ML algorithms and data science processes. After watching this talk, I hope audience members will have a better understanding of fairness-related harms and of how to approach the sociotechnical challenges of fairness in applied contexts. I provide a qualitative framework for assessing harms that audience members can use to evaluate their own projects for unfairness, and I showcase techniques in the Fairlearn library that they can install and use in their workflows.

**Description**

While fairness has become an increasingly popular topic in machine learning and data science, many data scientists struggle with how to incorporate fairness assessments and unfairness-mitigation techniques into their work. We will cover formal frameworks for assessing ML systems for fairness-related harms, and show how to apply different algorithmic techniques for mitigating unfairness in trained models.

**Outline**
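To make the assessment step concrete, here is a minimal pure-Python sketch of the kind of group-wise disparity check the talk describes; Fairlearn's `MetricFrame` automates exactly this pattern (per-group metrics plus a disparity summary). All data below is synthetic and for illustration only.

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def accuracy_by_group(y_true, y_pred, groups):
    """Compute accuracy separately for each sensitive-feature group."""
    result = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        result[g] = accuracy([y_true[i] for i in idx],
                             [y_pred[i] for i in idx])
    return result

# Synthetic labels, predictions, and a binary sensitive feature.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 1]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

per_group = accuracy_by_group(y_true, y_pred, groups)
# The gap between the best- and worst-served group is one simple
# disparity measure (MetricFrame exposes this as .difference()).
disparity = max(per_group.values()) - min(per_group.values())
```

With the toy data above, group "a" gets accuracy 0.75 while group "b" gets 0.5, so the model under-serves group "b" even though its overall accuracy looks reasonable.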
## SciPy 2021 Tutorial Proposal

**Title:** ML Fairness in the Wild: From Social Context to Practice using Microsoft's Fairlearn

**Short Description**

ML Fairness is an area of growing research and practice, one that all AI developers and practitioners should be familiar with.

**Long Description**

ML Fairness is an area of growing research and practice, one that all AI developers and practitioners should be familiar with. Below is an outline of the entire tutorial, including the concepts and hands-on aspects that will be covered.

**ML Fairness in the Responsible AI Pipeline (30 minutes)**

Before diving into the mathematical and code-based approaches to Fair ML, it is critical that all practitioners understand the social context.

*Data is Never Neutral*
- CONCEPT: Data is a product of the historic and social context in which it is measured, and of how it is measured.

*5 Types of Harms*
- CONCEPT: Unfair AI systems produce 5 types of harms, covered here with real examples, including resource-allocation and quality-of-service harms.

*Key Terms in ML Fairness and Use Case*
- CONCEPT: Sensitive vs. advantaged groups; positive vs. negative outcomes.
- CONCEPT: We will discuss the basic premise of the use case - bias in medical algorithms - and its various consequences.

**Uncovering and Documenting Data Bias (1 hour)**
- CONCEPT: Types of social biases in data, such as proxy variables, tainted features, and disparate impact.
- HANDS ON: Look for biases in the data using the example notebook and discuss.
- CONCEPT: Datasheets for Datasets.
- HANDS ON: Build a datasheet for the dataset at hand.

**Break (10 minutes)**

**Detecting Model Bias (1 hour)**
- CONCEPT: Types of ML fairness metrics, how to understand which is right for the use case, and the various tradeoffs involved.
- CONCEPT: Introduce the Fairlearn package.
- HANDS ON: Build a predictive model of choice and, using the Fairlearn package, assess how fairly the model performs for different groups.

**Break (10 minutes)**

**Mitigate Unfair Models (45 minutes)**
- CONCEPT: Two ways to mitigate model bias - what each means mathematically and what impact it has socially.
- HANDS ON: Mitigate the model by optimizing for different fairness goals using Fairlearn.
- CONCEPT: Model Cards.
- HANDS ON: Build a model card for your mitigated model.

**Discussion and Conclusion (15 minutes)**

Re-affirm that ML Fairness does not happen in a vacuum and that it is part of a much larger conversation about responsible AI.
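The mitigation step in the outline can be sketched in plain Python. Below is a toy post-processing approach - choose a per-group decision threshold so that every group receives the same selection rate - which is conceptually similar to what Fairlearn's `ThresholdOptimizer` does when optimizing for demographic parity. It is not the tutorial's actual notebook code; scores and groups are synthetic.

```python
def selection_rate(preds):
    """Fraction of instances predicted positive."""
    return sum(preds) / len(preds)

def equalize_selection(scores, groups, target_rate):
    """Post-process model scores: within each group, accept the
    top-k highest-scoring instances so that each group's selection
    rate matches target_rate as closely as possible."""
    preds = [0] * len(scores)
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        k = round(target_rate * len(idx))  # positives this group gets
        ranked = sorted(idx, key=lambda i: scores[i], reverse=True)
        for i in ranked[:k]:
            preds[i] = 1
    return preds

# Synthetic model scores and a binary sensitive feature.
scores = [0.9, 0.8, 0.4, 0.3, 0.7, 0.6, 0.5, 0.2]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
mitigated = equalize_selection(scores, groups, target_rate=0.5)
```

The tradeoff this illustrates is the one the tutorial discusses: equalizing selection rates may mean accepting lower-scoring instances from one group than from another, which is why the mathematical choice of fairness goal has social consequences.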
## PyData Global Proposal

**Kind:** Workshop

**Title:** Assessing and Mitigating Unfairness in AI Systems

**Brief Summary**

This workshop aims to help data science practitioners navigate the sociotechnical challenges of AI fairness. In the first half of the workshop, we walk participants through a Jupyter notebook showing how Fairlearn can be used to assess and mitigate unfairness in ML models. In the second half, a panel of speakers will discuss best practices for improving the fairness of real-world AI systems.

**Brief Outline**
**Description**

Fairness in AI systems is an interdisciplinary field of research and practice that aims to understand the negative impacts of AI, with an emphasis on improving outcomes for and supporting historically marginalized and underserved communities. In this workshop, we first walk participants through an hour-long tutorial on assessing and mitigating fairness-related harms in the context of a U.S. healthcare scenario. Participants will learn how to use the Fairlearn library to assess machine learning models for performance disparities across different racial groups. In the second part of the workshop, we invite researchers and industry practitioners to speak on a panel about the sociotechnical challenges data scientists face when applying fairness methodology in their work. The panel will be moderated by Miro Dudik, Senior Principal Researcher at Microsoft Research. The panelists will first introduce themselves and then answer moderator-provided questions about documenting responsible AI practices, discussing fairness within one's team and organization, and incorporating fairness into the design and evaluation of AI systems.
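One disparity metric the workshop's assessment tutorial relies on is the demographic parity difference - the largest gap in selection rate between any two sensitive-feature groups. Fairlearn exposes it as `fairlearn.metrics.demographic_parity_difference`; here is a minimal pure-Python sketch of the same computation, on synthetic data, for readers who want to see what the number means before opening the notebook.

```python
def demographic_parity_difference(y_pred, groups):
    """Largest gap in selection rate (fraction predicted positive)
    between any two sensitive-feature groups. 0.0 means all groups
    are selected at the same rate."""
    rates = {}
    for g in set(groups):
        members = [p for p, grp in zip(y_pred, groups) if grp == g]
        rates[g] = sum(members) / len(members)
    return max(rates.values()) - min(rates.values())

# Synthetic predictions and a binary sensitive feature:
# group "a" is selected 75% of the time, group "b" only 25%.
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_difference(y_pred, groups)
```

A gap close to 0 indicates similar treatment across groups; here the 0.5 gap would flag the model for the kind of follow-up investigation the workshop walks through.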
This year, we want to promote Fairlearn more through talks and tutorials at developer-oriented conferences. The goal of this discussion thread is to crowd-source a list of conferences (and associated deadlines for CfPs) that we want to target. Some upcoming conferences we should consider: