We’ve seen no shortage of scandals when it comes to AI. In 2016, Microsoft’s Tay, an AI bot built to learn in real time from social media content, turned into a misogynist, racist troll within 24 hours of launch. A ProPublica report claimed that an algorithm built by a private contractor was more likely to rate black parole candidates as high risk than white candidates. A landmark U.S. government study reported that more than 200 facial recognition algorithms, representing a majority of the industry, had a harder time distinguishing non-white faces. The bias in our human-built AI likely owes something to the lack of diversity among the humans who built it. After all, if none of the researchers building facial recognition systems are people of color, ensuring that non-white faces are properly distinguished may be a far lower priority.