# Projects

## Active Projects

### Perceptions of AI Fairness

In collaboration with Malik Boykin’s lab in Cognitive, Linguistic, and Psychological Sciences at Brown, we are studying people’s preferred definitions of fairness and the social and algorithmic factors that influence these preferences. To support this work, we are also developing techniques to interpolate between definitions of fairness.
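As a sketch of what interpolating between fairness definitions could look like, the example below blends two standard group-fairness gaps, demographic parity and equal opportunity, with a single mixing weight `lam`. The function names and the convex-combination scheme are illustrative assumptions on our part, not the project's actual technique.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive prediction rate between two groups."""
    g0, g1 = np.unique(group)  # assumes a binary group attribute
    return abs(y_pred[group == g0].mean() - y_pred[group == g1].mean())

def equal_opportunity_gap(y_true, y_pred, group):
    """Absolute difference in true positive rate between two groups."""
    g0, g1 = np.unique(group)
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return abs(tpr(g0) - tpr(g1))

def interpolated_gap(y_true, y_pred, group, lam):
    """Convex combination of the two gaps: lam=0 recovers demographic
    parity, lam=1 recovers equal opportunity, and intermediate values
    blend the two criteria."""
    return ((1 - lam) * demographic_parity_gap(y_pred, group)
            + lam * equal_opportunity_gap(y_true, y_pred, group))
```

A study could then, for example, sweep `lam` to find which blend of the two criteria best tracks a participant's fairness judgments.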

### Task-Level Fairness

In this project, we examine how fairness can be evaluated at the task and problem level, with the goal of developing heuristics that assess the feasibility of a fair model before any training takes place.

This project will produce a Python library that anyone can use in the exploratory data analysis (EDA) step of their project.
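To give a concrete sense of the kind of check such a library might offer, here is a minimal sketch of one pre-training heuristic. It rests on the known impossibility results that when base rates differ across groups, calibration and equalized odds cannot both be satisfied by a non-trivial classifier. The `base_rate_report` name, the `warn_gap` threshold, and the interface are hypothetical, not the library's released API.

```python
import pandas as pd

def base_rate_report(df, label_col, group_col, warn_gap=0.1):
    """EDA-stage feasibility heuristic based on per-group base rates.

    A large gap in base rates signals that some fairness criteria
    (e.g., calibration and equalized odds together) cannot all be
    met by any non-trivial model on this task.
    """
    rates = df.groupby(group_col)[label_col].mean()
    gap = float(rates.max() - rates.min())
    return {
        "base_rates": rates.to_dict(),
        "base_rate_gap": gap,
        "warning": gap > warn_gap,  # threshold is an arbitrary default
    }
```

A real library would likely bundle further checks (per-group sample sizes, proxy-feature screening), but even the base-rate gap alone is a cheap pre-training signal.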

### Model-Based Fairness Intervention Assessment

In this project, we are using bias models to evaluate the effectiveness of different types of fair machine learning interventions. We hope to use these insights to provide data scientists with more actionable advice on how to select a fairness intervention.
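As an illustration of this workflow, the sketch below injects a simple label-flipping bias into synthetic data, applies an instance-reweighing intervention, and compares true-positive-rate gaps against the unbiased labels. Both the bias model and the intervention here are placeholder choices of ours; the project's actual bias models and interventions may differ.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Bias model: positive labels in group 1 are flipped to negative
# with probability 0.3 (simple label bias).
n = 5000
group = rng.integers(0, 2, n)
x = rng.normal(size=(n, 2))
true_y = (x[:, 0] + 0.5 * x[:, 1] > 0).astype(int)
flip = (group == 1) & (true_y == 1) & (rng.random(n) < 0.3)
observed_y = np.where(flip, 0, true_y)

# Include the group attribute as a feature so the label bias can
# propagate into group-dependent predictions.
features = np.column_stack([x, group])

# Intervention: reweigh instances so each (group, label) cell carries
# equal total weight, in the spirit of pre-processing interventions.
def cell_weights(g, y):
    w = np.ones(len(y))
    for gv in (0, 1):
        for yv in (0, 1):
            mask = (g == gv) & (y == yv)
            w[mask] = len(y) / (4 * max(mask.sum(), 1))
    return w

baseline = LogisticRegression().fit(features, observed_y)
treated = LogisticRegression().fit(
    features, observed_y, sample_weight=cell_weights(group, observed_y))

def tpr_gap(model):
    """Gap in true positive rate across groups, measured against the
    unbiased labels the bias model gives us access to (in-sample,
    for brevity)."""
    pred = model.predict(features)
    tprs = [pred[(group == g) & (true_y == 1)].mean() for g in (0, 1)]
    return abs(tprs[0] - tprs[1])

print(f"TPR gap without intervention: {tpr_gap(baseline):.3f}")
print(f"TPR gap with reweighing:      {tpr_gap(treated):.3f}")
```

Because the bias model makes the unbiased labels available, interventions can be scored against ground truth rather than the biased observations, which is exactly what real-world audits usually lack.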



## Past Projects

::::{grid} 2

:::{grid-item-card}
:img-top: _static/img/fair_dynamics.png


Dynamics of Fairness

^^^

(inactive, seeking a student)

How does fairness generalize, tolerate distribution shift, and propagate through interacting systems?
+++


[{far}`file-pdf`](https://dynamicdecisions.github.io/assets/pdfs/29.pdf)

:::



:::{grid-item-card}
:img-top: _static/img/fairness_forensics.png

Wiggum
^^^

Fairness forensics inspired by Simpson's paradox, in collaboration with [the OU Data Lab](https://oudatalab.com/)

+++

[{fas}`house`](https://fairnessforensics.github.io/wiggum)

[{fab}`github`](https://github.com/fairnessforensics/wiggum)

:::