Are We More Likely To Trust Machines With Private Information Than Humans?

Recently, concern has been aired about just what kind of information devices such as Amazon’s Alexa are consuming as they sit quietly in our homes.  Whilst various people have voiced unease about the possibility, recent research from Penn State suggests that consumers are pretty relaxed about it all.

The research finds that people are generally more willing to trust machines with private information than they are fellow human beings.  The authors attribute this to the ‘machine heuristic’, a mental shortcut that imbues people with greater trust in technology than in humans, who are capable of dishonesty and fraud.

“This tendency to trust the machine agent more than the human agent was much stronger for people who were high on the belief in the machine heuristic,” the authors say. “For people who did not believe in the machine heuristic, it didn’t make a difference whether the travel agent was a machine or a human.”

They go on to say that the very presence of an artificial agent was seemingly enough to trigger this machine heuristic, and therefore to alter the behavior of those exposed to it.

Faith in machines

They hypothesize that people may have greater faith in machines because they believe machines behave ethically, don’t gossip or lie, and will act lawfully with the information provided to them.  Whilst in many cases this may be true, it remains possible for even the most well-intentioned machines to act in ways their designers did not envisage, and of course this doesn’t take account of machines designed specifically to do us harm, such as by extracting financial information or passwords from us.

“This study should serve as a warning for people to be aware of how they interact online,” the authors explain. “People should be aware that they may have a blind belief in machine superiority. They should watch themselves when they engage online with robotic interfaces.”

Of course, such inherent trust needn’t be used for nefarious ends, and the authors highlight how it could help developers create more user-friendly applications.  For instance, making it crystal clear in the design of an app that no human is in the loop may actually make some people trust the service more.  Quite how developers would identify those users who hold the machine heuristic is anyone’s guess, but it does provide food for thought, especially in areas where humans have been shown to exhibit unpredictable or undesirable behaviors.

What’s more, the authors contend that these design cues only need to be subtle to have the desired effect.

“In all of this, one thing I would like to stress is that the designers have to be ethical,” they explain. “They should not unethically try to extract information from unsuspecting consumers.”

It’s certainly an interesting topic, and the researchers plan to explore it further by studying what role the machine heuristic plays in how much we trust everything from chatbots to smart speakers.  It complements a parallel study the team conducted into whether the heuristic influences our trust in AI-based medical technology.
