Do We Have A Moral Right To Know How Algorithms Work?

Algorithms play an increasing part in our interactions with organizations, so it’s perhaps understandable that there is a growing desire for these algorithms to be transparent and explainable. New research from Carnegie Mellon argues that customers have a moral right to know how the algorithms that affect them operate.

“In most cases, companies do not offer any explanation about how they gain access to users’ profiles, from where they collect the data, and with whom they trade their data,” the researchers say. “It’s not just fairness that’s at stake; it’s also trust.”

Autonomous decisions

The authors highlight the rise of autonomous decision-making systems that rely heavily on user-generated data, and note that many industry experts believe such systems demand greater transparency and accountability. They cite the EU’s General Data Protection Regulation (GDPR) as one measure designed to provide it.

While GDPR is a good start, it remains far from clear whether individuals have a right to understand how algorithms arrive at the decisions that affect them. The research sets out a moral argument that the authors believe could form the foundation of any legal definition of such a right.

They believe that traditional informed consent is no longer feasible given the ongoing, open-ended nature of so many digital transactions. A better approach, they argue, is to reframe informed consent as an assurance of trust for autonomous processes that are incomplete at the time consent is given.

Informed consent

The authors argue that obtaining informed consent is an ethical requirement, especially when personal data is collected and processed. Yet even informed consent is not enough when AI systems make decisions in unpredictable and non-contextual ways.

Absent such an assurance of trust, organizations have a moral duty to explain how their systems work, and to do so not just at the time a decision is made but afterward as well. Only then can the explanation adequately cover both the functionality of the system and the rationale behind each specific decision.
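
The distinction here, between explaining a system’s overall functionality and explaining the rationale behind one specific decision, corresponds to what practitioners often call global versus local explainability. The sketch below is purely illustrative (the lending model, feature names, and weights are hypothetical, not drawn from the research), but it shows how the two kinds of explanation differ in practice:

```python
# Illustrative sketch only: a hypothetical linear lending model used to
# contrast a "functionality" (global) explanation with a per-decision
# (local) explanation. Feature names, weights, and threshold are invented.

FEATURES = ["income", "debt_ratio", "years_at_address"]
WEIGHTS = {"income": 0.4, "debt_ratio": -0.7, "years_at_address": 0.2}
THRESHOLD = 0.5

def score(applicant: dict) -> float:
    """Linear score over (pre-normalized) feature values."""
    return sum(WEIGHTS[f] * applicant[f] for f in FEATURES)

def global_explanation() -> str:
    """Functionality view: how the system weighs inputs in general."""
    terms = ", ".join(f"{f} (weight {w:+.1f})" for f, w in WEIGHTS.items())
    return (f"Decisions are a weighted sum of: {terms}; "
            f"an application is approved if the score exceeds {THRESHOLD}.")

def local_explanation(applicant: dict) -> str:
    """Rationale view: why this particular decision came out as it did."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in FEATURES}
    total = score(applicant)
    decision = "approved" if total > THRESHOLD else "declined"
    detail = ", ".join(f"{f}: {c:+.2f}" for f, c in contributions.items())
    return f"Application {decision} (score {total:.2f}): {detail}"

applicant = {"income": 0.9, "debt_ratio": 0.6, "years_at_address": 0.5}
print(global_explanation())
print(local_explanation(applicant))
```

Notably, the global view alone would not satisfy the duty the authors describe: it tells the applicant how the system works in general, but only the local view explains why their particular application was declined.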

This is a task that companies are increasingly tackling head on, with a growing number employing “data interpreters” to bridge the gap between data science and the end users and customers who are ultimately affected by the algorithms.

“Will requiring an algorithm to be interpretable or explainable hinder businesses’ performance or lead to better results?” the researchers conclude. “That is something we’ll see play out in the near future, much like the transparency conflict of Apple and Facebook. But more importantly, the right to explanation is an ethical obligation apart from bottom-line impact.”
