Is Reliability Key For Humans To Trust Robots?

Trust in robotic technology is something I’ve touched upon a few times in the past year, and as the tech becomes more ubiquitous, it’s likely to be an ever more pressing concern.

A new study from the US Army furthers our exploration of this vital topic. The research revolved around the so-called transparency of autonomous agents, meaning their ability to convey their intent, and the impact that has both on the trust humans place in the technology and on the success of their work alongside each other. Of particular interest was how trust is affected by any mistakes made by the autonomous agent.

The research saw participants witness a robot making a mistake before conveying their perception of the robot under a range of conditions, including a scenario in which the circumstances surrounding the mistake and the robot's reasoning were explained.

“Understanding how the robot’s behavior influences their human teammates is crucial to the development of effective human-robot teams, as well as the design of interfaces and communication methods between team members,” the researchers explain. “This research contributes to the Army’s Multi-Domain Operations efforts to ensure overmatch in artificial intelligence-enabled capabilities. But it is also interdisciplinary, as its findings will inform the work of psychologists, roboticists, engineers, and system designers who are working toward facilitating better understanding between humans and autonomous agents in the effort to make autonomous teammates rather than simply tools.”

Trust in the machine

A simulated environment was developed to allow participants to observe a human and a robot working together in a military exercise. Each participant was asked to monitor the team and evaluate the robot's performance.

Each team encountered various events during the exercise and had to respond accordingly. The human soldier responded flawlessly to every eventuality, but the robot sometimes misread the situation and made mistakes.

Throughout the experiment, the robot would share information explaining the rationale behind its actions and what it expected the outcome to be.

It transpired that no amount of explanation was enough to convince onlookers that the robot could be relied upon. A sound process that ended in a failed outcome was still viewed negatively; the only way trust was sustained was for the robot to perform as expected, and to do so consistently.

“Earlier studies suggest that context matters in determining the usefulness of transparency information,” the authors conclude. “We need to better understand which tasks require more in-depth understanding of the agent’s reasoning, and how to discern what that depth would entail. Future research should explore ways to deliver transparency information based on the tasking requirements.”
