Project - Simulation of Long-term Impact of AI on Fairness
Machine learning systems have been deployed to aid high-impact decision-making in settings such as criminal sentencing, child welfare assessments, allocation of medical attention, and loan approval. Understanding whether such systems are fair is crucial, and requires understanding models' short- and long-term effects. A good number of fairness algorithms and toolkits exist for developing unbiased AI models (e.g., AIF 360), and there are ways to robustify these models against perturbations in population distributions. However, fairness is guaranteed only at the initial stage, and little has been studied about the recursive use of these models. Moreover, the AI interventions in use affect the population, which in turn affects the data collected for training future AI models.
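This feedback loop can be illustrated with a minimal sketch. In the toy lending scenario below, a decision rule is retrained each round on the current population, and approval outcomes shift the group score distributions seen in the next round. All group names, score distributions, and feedback coefficients are illustrative assumptions, not real data or a proposed mechanism.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: two groups with different initial mean "creditworthiness"
# scores. Each round, a lender retrains a simple threshold rule on the current
# population, and approval outcomes feed back into the next round's means.
means = {"A": 0.6, "B": 0.4}
history = []

for t in range(10):
    scores = {g: rng.normal(means[g], 0.15, 5000) for g in means}
    # "Retrain": approve anyone above the pooled median score.
    threshold = np.median(np.concatenate(list(scores.values())))
    for g in means:
        approved = scores[g] > threshold
        if approved.any():
            # Repayment probability equals the (clipped) true score.
            repay = rng.random(approved.sum()) < np.clip(scores[g][approved], 0, 1)
            # Feedback: successful loans raise the group mean, defaults lower it.
            means[g] += 0.05 * approved.mean() * (repay.mean() - 0.5)
    history.append(dict(means))

print(history[-1])
```

Even this crude dynamic shows why initial-stage fairness guarantees are not enough: the advantaged group is approved more often and repays more often, so its mean drifts upward faster and the gap the next model is trained on widens.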
There has been little exploration of simulating and understanding the long-term behavior and stability of such models, or of enabling what-if modeling. Typically there are multiple players/enterprises in the ecosystem, each with its own AI model interacting with the same population, and the effects of enterprises deploying AI with varying degrees of fairness are not understood. Consequently, the disparities in payoffs/profits across these enterprises are not well understood, which leads us to ask: "Would the fairer enterprises survive?", and what policies or governance must be in place to ensure fairness in a multi-enterprise setting?
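A sketch of the multi-enterprise question might look as follows: two hypothetical lenders serve random halves of the same population, one with a single global threshold and one that equalizes approval rates across groups via per-group thresholds, and we compare their payoffs. The lender names, thresholds, and payoff rules are all illustrative assumptions; a real framework would model applicant choice, retraining, and many rounds.

```python
import numpy as np

rng = np.random.default_rng(1)

# Payoff model (assumed): +1 per repaid loan, -1 per default, where the
# repayment probability equals the applicant's (clipped) true score.
def profit(scores, approved):
    repaid = rng.random(scores.size) < np.clip(scores, 0, 1)
    return int(np.sum(approved & repaid) - np.sum(approved & ~repaid))

# Two groups with different score distributions (illustrative numbers).
groups = {"A": rng.normal(0.65, 0.15, 4000), "B": rng.normal(0.45, 0.15, 4000)}
profits = {"strict": 0, "fair": 0}

for g, scores in groups.items():
    first, second = scores[: scores.size // 2], scores[scores.size // 2 :]
    # "strict" lender: one global threshold for everyone.
    profits["strict"] += profit(first, first > 0.55)
    # "fair" lender: approve the top 40% within each group (equal rates).
    thr = np.quantile(second, 0.6)
    profits["fair"] += profit(second, second > thr)

print(profits)
```

Running many such rounds, with population feedback and applicants migrating between lenders, is the kind of experiment the proposed framework should support in order to study whether fairness constraints cost an enterprise its competitiveness.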
The purpose of this challenge is to build a simulation framework that enables us to discover how fairness evolves when multiple enterprises interact with a diverse population.