Dynamics of Fairness
How does fairness generalize, tolerate distribution shift, and propagate through interacting systems?
In collaboration with Malik Boykin’s lab in Cognitive, Linguistic, and Psychological Sciences at Brown, we are studying people’s preferred definitions of fairness and the social and algorithmic factors that influence these preferences. To support these studies, we are also developing techniques to interpolate between definitions of fairness.
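One simple way to interpolate between fairness definitions is to take a convex combination of their violation measures. The sketch below is illustrative only (the function names and the choice of demographic parity and equal opportunity as endpoints are our assumptions, not this project’s actual method):

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_gap(y_pred, y_true, group):
    """Absolute difference in true-positive rates between two groups."""
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return abs(tpr(0) - tpr(1))

def interpolated_unfairness(y_pred, y_true, group, alpha):
    """Convex combination of two fairness criteria; alpha in [0, 1]
    slides between equal opportunity (0) and demographic parity (1)."""
    return (alpha * demographic_parity_gap(y_pred, group)
            + (1 - alpha) * equal_opportunity_gap(y_pred, y_true, group))
```

Sliding `alpha` traces a continuum of criteria, which can be matched against the definition a study participant prefers.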
Task-Level Fairness
In this project, we examine how fairness can be evaluated at the task and problem level in order to develop heuristics for whether a fair model is feasible before any training occurs.
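As one example of a pre-training feasibility signal, a well-known impossibility result says that calibration and equalized error rates cannot both hold when group base rates differ (absent a perfect predictor). A minimal sketch of such a check, with hypothetical function names and a tolerance we chose for illustration:

```python
import numpy as np

def base_rate_disparity(y_true, group):
    """Absolute difference in positive-label base rates between two groups."""
    return abs(y_true[group == 0].mean() - y_true[group == 1].mean())

def joint_fairness_feasible(y_true, group, tol=1e-6):
    """Heuristic pre-training check: calibration and equalized error rates
    can only hold simultaneously when base rates (approximately) match."""
    return base_rate_disparity(y_true, group) < tol
```

This inspects only the labeled data, so it can flag an infeasible fairness target before any model is trained.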
Model-Based Fairness Intervention Assessment
In this project, we are using bias models to evaluate the effectiveness of different types of fair machine learning interventions.
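The general pattern can be sketched as: generate data under an explicit bias model, then compare a disparity metric before and after an intervention. Everything below is a toy illustration (the additive score-shift bias model and the per-group-threshold intervention are our assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def biased_scores(n, bias):
    """Toy bias model: group 1's scores are shifted down by `bias`."""
    group = rng.integers(0, 2, n)
    score = rng.normal(0, 1, n) - bias * group
    return score, group

def selection_rate_gap(selected, group):
    """Disparity metric: difference in selection rates between groups."""
    return abs(selected[group == 0].mean() - selected[group == 1].mean())

def per_group_threshold(score, group, rate):
    """Intervention: group-specific thresholds hitting a target selection rate."""
    sel = np.zeros_like(score, dtype=bool)
    for g in (0, 1):
        thresh = np.quantile(score[group == g], 1 - rate)
        sel[group == g] = score[group == g] > thresh
    return sel

score, group = biased_scores(10_000, bias=0.5)
naive = score > np.quantile(score, 0.8)         # single global threshold
fixed = per_group_threshold(score, group, 0.2)  # group-aware thresholds
```

Because the bias process is known by construction, the reduction in the gap from `naive` to `fixed` directly measures how much of the injected bias the intervention removes.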
Simpson’s-Paradox-Inspired Fairness Forensics
In collaboration with the OU Data Lab.
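A minimal sketch of one such forensic check: flagging datasets where an aggregate disparity reverses sign within every stratum, the signature of Simpson’s paradox. The function names and detection rule here are illustrative assumptions, not the project’s actual tooling:

```python
import numpy as np

def rate_diff(success, group, mask=None):
    """Difference in success rates (group 0 minus group 1) over `mask`."""
    if mask is None:
        mask = np.ones_like(group, dtype=bool)
    s, g = success[mask], group[mask]
    return s[g == 0].mean() - s[g == 1].mean()

def simpsons_reversal(success, group, stratum):
    """Flag a Simpson's paradox: the aggregate rate difference has the
    opposite sign in every stratum."""
    agg = rate_diff(success, group)
    per = [rate_diff(success, group, stratum == s) for s in np.unique(stratum)]
    return agg != 0 and all(np.sign(d) == -np.sign(agg) for d in per)
```

Such a reversal signals that a fairness conclusion drawn from aggregate rates would flip once a confounding stratum (e.g., department or region) is accounted for.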