In late 2018, Amazon discontinued its AI-based recruitment system after finding that it was biased against women. According to sources close to the matter, the tool gave low ratings to resumes containing the terms “woman” or “women’s” in applications for technical roles, and went as far as downgrading applicants from two all-women’s colleges.
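To see how this kind of bias arises mechanically, consider a minimal sketch, using synthetic and deliberately exaggerated data (this is not Amazon’s actual system): a text classifier trained on historically skewed hiring decisions learns to treat gendered terms as negative signals.

```python
# Illustrative sketch only: synthetic resumes and labels, not Amazon's system.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical historical hiring data, skewed against resumes that
# mention "women's" (e.g., "women's chess club captain").
resumes = [
    "software engineer python java",        # hired
    "backend developer java sql",           # hired
    "women's chess club captain python",    # rejected (historical bias)
    "women's coding society lead java",     # rejected (historical bias)
]
hired = [1, 1, 0, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The learned weight for the token "women" is negative: the model has
# absorbed the historical bias as if it were a predictive signal.
idx = vectorizer.vocabulary_["women"]
print("weight for 'women':", model.coef_[0][idx])
```

Nothing in the pipeline is malicious; the model simply reproduces whatever patterns, fair or unfair, the historical labels contain.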
Is AI Bias a Corporate Social Responsibility Issue?
For businesses that want to build fair and accurate AI systems, the most common question is: how do we drive bias out of the training data? Corporate social responsibility (CSR) campaigns offer one avenue. Companies can use CSR budgets to combat the underlying social problems that introduce bias in the first place, and they could hire critical public interest technologists (teams of computer scientists, sociologists, anthropologists, legal scholars, and activists) to devise strategies for building fairer, more accurate training data. These teams would be charged with conducting research that advises CSR groups on strategic investments in organizations working to reduce the expression of racism, sexism, ableism, homophobia, and xenophobia in society. Reducing those biases at the source lowers the chances that they are encoded into the datasets used in machine learning, which in turn produces fairer, more accurate AI systems.
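As a hedged sketch of the kind of dataset research such a team might conduct (the records and field names below are hypothetical), a simple first audit is to compare positive-label rates across demographic groups in the training data:

```python
# Hypothetical training-data audit: do historical labels favor one group?
from collections import defaultdict

training_records = [
    {"group": "A", "label": 1}, {"group": "A", "label": 1},
    {"group": "A", "label": 0}, {"group": "B", "label": 1},
    {"group": "B", "label": 0}, {"group": "B", "label": 0},
]

totals, positives = defaultdict(int), defaultdict(int)
for record in training_records:
    totals[record["group"]] += 1
    positives[record["group"]] += record["label"]

rates = {g: positives[g] / totals[g] for g in totals}
print("positive-label rate by group:", rates)

# A large gap signals that historical bias is baked into the labels
# and will be reproduced by any model trained on them.
print("demographic parity gap:", max(rates.values()) - min(rates.values()))
```

An audit like this does not fix the data, but it tells a CSR team where the disparities lie and where upstream investment is most likely to matter.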