Do People Trust Robots As Much As Humans?

Trust in robots is increasingly important as more of them work alongside humans. New research from the University of Montreal is the latest to explore how trust in robotic technology ebbs and flows among the people who work with it.

The researchers conducted a trust game experiment in which human volunteers were asked to entrust a $10 endowment to a partner, who was either a human, a robot, or a robot acting on behalf of a human. It was in many ways a classic game theory setup: in the standard trust game, any amount sent is multiplied before it reaches the partner, who then decides how much to return, so the volunteer knows gains are to be made but that trust is key. The robots in the experiment were programmed to mimic reciprocation behaviors from previous human players.
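As a rough sketch, the mechanics look something like the following, assuming the standard rules of the classic trust game (amounts sent are tripled in transit); the reciprocation rates used here are hypothetical illustrations, not values from the study.

```python
# A minimal sketch of the trust game described above, assuming the
# classic rules: the amount sent is tripled in transit, and the
# partner chooses what fraction of the pot to send back. The
# reciprocation rates are hypothetical, not taken from the study.
ENDOWMENT = 10   # the $10 stake given to each volunteer
MULTIPLIER = 3   # standard tripling of the invested amount


def play_round(amount_sent: float, reciprocation_rate: float) -> float:
    """Return the investor's payoff for one round of the trust game.

    reciprocation_rate is the fraction of the tripled pot the partner
    returns; a robot partner mimicking past human players would draw
    this from recorded human behaviour.
    """
    kept = ENDOWMENT - amount_sent
    pot = amount_sent * MULTIPLIER
    returned = pot * reciprocation_rate
    return kept + returned


# Sending the full endowment to a partner who returns half of the
# tripled pot pays $15; sending nothing guarantees only the $10 stake.
print(play_round(10, 0.5))  # 15.0
print(play_round(0, 0.5))   # 10.0
```

The payoff structure makes the dilemma plain: the volunteer only gains by sending money, but every dollar sent is at the mercy of the partner's willingness to reciprocate.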

It’s common in these kinds of games for decisions to quickly converge on mutually beneficial outcomes. In this experiment, a key factor was the emotional reaction of people following their interactions with robots versus humans.

Building trust

The results suggest that people develop trust in humans and robots in similar ways. People traditionally trust other humans both for monetary gain and to learn about the other party, and a similar pattern emerged in their dealings with the robots.

It emerged that the volunteers experienced distinct social emotions depending on whether they engaged with humans or robots. For instance, when a human failed to reciprocate, far more anger was triggered than when a robot failed to do so. By the same token, feelings of gratitude were also far greater towards human partners.

There was also a distinct change in emotions when the robot partner acted on behalf of another human rather than on its own behalf, suggesting a clear desire to see robots as servants rather than masters.

Equally, it was interesting to observe that trust was fairly consistent across partners at the start of the experiment, even as social emotions differed, with the researchers suggesting that trust is likely to become malleable over repeated interactions.
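The suggestion that trust shifts with repeated interactions can be pictured with a simple update rule. The sketch below treats trust as a running average of reciprocation outcomes; the starting level and learning rate are purely illustrative assumptions, not parameters from the study.

```python
# Illustrative model of trust as a quantity nudged after each
# interaction. The initial trust level and learning rate are
# assumptions for illustration, not values from the Montreal study.
def update_trust(trust: float, reciprocated: bool,
                 learning_rate: float = 0.2) -> float:
    """Move trust towards 1 after reciprocation, towards 0 otherwise."""
    outcome = 1.0 if reciprocated else 0.0
    return trust + learning_rate * (outcome - trust)


trust = 0.5  # a neutral baseline, consistent early trust levels
for reciprocated in [True, True, False, True, False, False]:
    trust = update_trust(trust, reciprocated)
    print(f"reciprocated={reciprocated!s:5}  trust={trust:.2f}")
```

Under a rule like this, a consistent partner steadily builds trust, while an inconsistent one keeps it oscillating around the baseline.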

This suggests that if reciprocation is likely to be inconsistent, it may be better handled by robots than by humans, as that engagement elicits less emotion. If reciprocation has to be consistent, however, it’s better for humans to engage with other humans.

Relied upon

A second study, from the US Army, explored the issue of trust through the lens of reliability. The research revolved around the so-called transparency of autonomous agents, that is, their ability to convey their intent, and the impact this has both on the trust humans place in the technology and on the success of their work alongside each other. Of particular interest was how trust is affected by any mistakes the autonomous agent makes.

Participants watched a robot make a mistake and then reported their perception of it under a range of conditions, including a scenario in which the circumstances surrounding the mistake and the robot’s reasoning were explained.

“Understanding how the robot’s behavior influences their human teammates is crucial to the development of effective human-robot teams, as well as the design of interfaces and communication methods between team members,” the researchers explain. “This research contributes to the Army’s Multi-Domain Operations efforts to ensure overmatch in artificial intelligence-enabled capabilities. But it is also interdisciplinary, as its findings will inform the work of psychologists, roboticists, engineers, and system designers who are working toward facilitating better understanding between humans and autonomous agents in the effort to make autonomous teammates rather than simply tools.”
