-
Considering Process in Algorithmic Bias Detection and Mitigation
-
Potential for AI to help
- biased, e.g., based on names
- inconsistent
- slow
-
algorithms can also introduce bias and inconsistency
- e.g., models that are more likely to recommend holding Black defendants in jail than white defendants
- we need to be careful with these systems
-
algorithmic fairness
-
fairness-performance tradeoffs
- the tension between accuracy and fairness
- making a model more fair often makes it less accurate
- but it doesn't have to be this way
-
her approach: investigate the entire process, not just the final model
-
real-world interventions, tech policy
-
TODAY'S TALK
- Leave-one-out unfairness
- IRS collaboration
- Bias in recidivism prediction
-
Leave-one-out unfairness
- a form of inconsistency
- the model's prediction for a person can change depending on whether one other person is in the training set
- this isn't ideal
- measured as the chance that a person's outcome changes under a one-point change to the training dataset
- it is unfair and feels arbitrary
-
learning rule instability as (procedural) unfairness
-
definition: leave-one-out unfairness
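A sketch of how this might be formalized, based only on the informal description above (the notation is my reconstruction, not necessarily the slide's): for a learning rule \mathcal{A}, training set D, and individual x,

\[
\mathrm{LOO}_{\mathcal{A}}(D, x) \;=\; \Pr_{z \sim D,\ \text{randomness of }\mathcal{A}}\!\big[\, \mathcal{A}(D)(x) \neq \mathcal{A}(D \setminus \{z\})(x) \,\big]
\]

i.e., the probability that x's prediction changes when a single point z is left out of training.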



- focuses on consistency
- connections to differential privacy
- how much of a problem is leave-one-out unfairness in the real world?
- very prevalent in deep models (a sketch for estimating it empirically follows below)
- graphs shown for several datasets: German Credit, Seizure, LFW
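A minimal sketch of how one might estimate this flip probability for a single individual -- my own illustration assuming scikit-learn-style models, not code from the talk:

import numpy as np
from sklearn.base import clone

def estimate_loo_unfairness(model, X_train, y_train, x_query, n_trials=50, seed=0):
    """Estimate how often x_query's prediction flips when one randomly
    chosen training point is left out (hypothetical helper, not the talk's code)."""
    rng = np.random.default_rng(seed)
    base_pred = clone(model).fit(X_train, y_train).predict(x_query.reshape(1, -1))[0]
    flips = 0
    for _ in range(n_trials):
        drop = rng.integers(len(X_train))               # index of the point to leave out
        keep = np.delete(np.arange(len(X_train)), drop)
        loo_pred = clone(model).fit(X_train[keep], y_train[keep]).predict(x_query.reshape(1, -1))[0]
        flips += int(loo_pred != base_pred)
    return flips / n_trials                             # estimated flip probability

For deterministic learners this isolates the effect of the removed point; for deep models one would also need to account for randomness in training itself.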
-
we need a more formal understanding of inconsistency
-
how to predict with selective ensembles
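As I understood it, a selective ensemble predicts only when its member models agree to a statistically significant degree and abstains otherwise. A rough sketch under that assumption (the exact test and names are my own choices, not necessarily the speaker's):

import numpy as np
from scipy.stats import binomtest

def selective_predict(models, x, alpha=0.05):
    """Return the majority label for example x, or None (abstain) when the
    models' agreement is not statistically significant."""
    preds = np.array([int(m.predict(x.reshape(1, -1))[0]) for m in models])
    labels, counts = np.unique(preds, return_counts=True)
    order = np.argsort(counts)[::-1]                    # most common label first
    top = int(counts[order[0]])
    runner_up = int(counts[order[1]]) if len(counts) > 1 else 0
    # One-sided binomial test: does the top label beat the runner-up more
    # often than a coin flip would?
    p_value = binomtest(top, top + runner_up, 0.5, alternative="greater").pvalue
    return int(labels[order[0]]) if p_value <= alpha else None

The member models could be, for instance, the same architecture trained on bootstrap resamples or with different random seeds; the abstention rate is the price paid for consistent predictions.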
-
consistency and explanations
- inconsistency limits individuals' ability to improve their prediction outcomes
- created a method for providing stable explanations (rough sketch below)
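I did not catch the details of that method; one plausible flavor, purely as my own illustration, is to aggregate feature attributions across an ensemble rather than trusting any single model's explanation:

import numpy as np

def averaged_attributions(models, x):
    """Average simple input-times-weight attributions across linear models
    (hypothetical sketch; for deep models one would average gradient-based
    attributions instead)."""
    per_model = np.stack([m.coef_.ravel() * x for m in models])
    return per_model.mean(axis=0)       # one aggregated score per feature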
-
mitigation techniques: ongoing work
- evaluated in terms of revenue and the no-change rate