How People Develop Trust In AI

AI has developed a growing list of capabilities in recent years, but lingering doubts exist about its trustworthiness, whether due to biases in the data it’s fed, how that data is governed, or the impact the technology will have on the workforce.

New research from Carnegie Mellon University explores how people develop trust in AI, with the study spanning two decades of prior research on the topic.  The analysis suggests that how AI is represented is crucial to our trust in it, and the researchers provide a framework for addressing the various factors that go into building emotional and cognitive trust in the technology.

“The trust that users develop in AI will be central to determining its role in organizations,” the researchers explain. “We addressed the dynamic nature of trust by exploring how trust develops for people interacting with different representations of AI (e.g., robots, virtual agents, or embedded) as well as the features of AI that facilitate the development of trust.”

Tangibility, transparency and reliability

The researchers were specifically keen to explore the role of tangibility (i.e. whether the technology has a physical presence that can be perceived), transparency (i.e. the amount of insight users have into the rules and logic of the technology), and reliability (i.e. whether the technology produces consistent outcomes) in public trust.

The researchers also explored factors such as the characteristics of the task, the immediacy of the AI's behaviors, and even the anthropomorphism of the technology.

From this, the researchers searched Google Scholar for articles on human trust in AI published between 1999 and 2019, finding approximately 200 articles and conference proceedings.  An additional 50 articles were found in other databases, and after filtering their findings, they eventually reviewed 150 articles on the topic.

The analysis found that how AI is represented plays a key role in the trust people eventually have in the technology.  For instance, they discovered that the way we build trust with robotic AI is broadly comparable with our trust mechanisms with humans.  Just as with other people, we tend to start the relationship with low trust and gradually progress upwards.  Interestingly, however, for virtual and embedded AI, the reverse was the case, with high trust at the beginning, which declined with experience.

Development of trust

The level of machine intelligence displayed by the technology also mattered, with high intelligence expediting the development of trust in robotic systems and helping to maintain it in virtual systems.

Transparency was also important, as it enabled people to understand the rudiments of how the system was working.  This was key in both robotic and virtual systems, but especially so in virtual systems whose workings are less physically obvious.

Anthropomorphism also had a variable impact on our ability to build trust with AI systems.  In virtual AI, anthropomorphism was broadly positive, but for robotic AI, the picture was far less clear.  There was a general preference for anthropomorphic robots, but this could also veer into the uncanny valley, and make people feel uncomfortable.

“Trust can predict the level of reliance on technology, while the level of correspondence between someone’s trust and the capabilities of the technology, known as calibration, can influence how the technology is used,” the researchers conclude.
