Auditing Algorithms for Bias
In 1971, philosopher John Rawls proposed a thought experiment for understanding the idea of fairness: the veil of ignorance. What if, he asked, we could erase our memories of who we were – our race, our income level, our profession, anything that might influence our opinions? Who would we protect, and whom would we serve with our policies? The veil of ignorance is a philosophical exercise for thinking about justice and society, but it can be applied to the burgeoning field of artificial intelligence (AI) as well. Inspired by this ideal of fairness, Accenture created a tool that measures the disparate impact of algorithms and corrects them to achieve equal opportunity. The tool exposes potential disparate impact by investigating the data and the model, and leaves it to the user to balance fairness against accuracy.
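The article does not detail the tool's internals, but the disparate-impact check it describes is commonly computed as the ratio of favorable-outcome rates between an unprivileged and a privileged group, with values below 0.8 (the "four-fifths rule") flagged as potentially discriminatory. A minimal sketch of that metric, with hypothetical example data:

```python
from typing import Sequence

def disparate_impact_ratio(outcomes: Sequence[int], groups: Sequence[str],
                           privileged: str) -> float:
    """Ratio of favorable-outcome rates: unprivileged / privileged.

    A value near 1.0 suggests parity; below 0.8 is the common
    'four-fifths rule' threshold for potential disparate impact.
    """
    priv = [o for o, g in zip(outcomes, groups) if g == privileged]
    unpriv = [o for o, g in zip(outcomes, groups) if g != privileged]
    priv_rate = sum(priv) / len(priv)
    unpriv_rate = sum(unpriv) / len(unpriv)
    return unpriv_rate / priv_rate

# Hypothetical loan decisions (1 = approved) for two groups
outcomes = [1, 1, 0, 1, 1, 0, 0, 1, 0, 0]
groups   = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
print(disparate_impact_ratio(outcomes, groups, privileged="A"))  # → 0.25
```

Here group A is approved 80% of the time and group B only 20%, giving a ratio of 0.25 – the kind of disparity an auditing tool would surface for the user to weigh against model accuracy.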