Are We More Likely To Trust AI-Generated Faces Than Real Ones?

In the play Not One of These People, Martin Crimp showcases the progress that has been made with deepfake technology. The play features a series of AI-generated avatars who act as vehicles for Crimp’s narration.

While the avatars are impressive, each has telltale signs that they’re not real. Nonetheless, research from the University of London suggests that people might actually be more inclined to trust such faces than they would real ones.

People who don’t exist

The research explored how we perceive artificial faces. Volunteers were shown a series of faces, some real and some AI-generated, and asked to judge whether each face was real. Interestingly, they consistently rated the artificial faces as real more often than the genuine ones.

The results show how far the technology has come, with artificially generated faces now highly realistic. While the technology has some benign applications, it can also be used for purposes such as espionage and political propaganda, so the findings could have serious implications for our ability to judge the reliability of information.

This is especially so because the researchers also found that participants were more likely to judge information as accurate when it “came” from faces they believed to be real. Since those original judgements were often wrong, participants in effect treated information from deepfakes as more reliable than information from real people.

“Many have argued that one of the biggest casualties of artificial intelligence will be the erosion of trust in what we see and hear,” the researchers explain. “As we show in our study, the realness that people project onto artificial faces makes them more likely to be trusted as informational sources, but later, when people realize there are AI images out there, their whole trust of any information given to them is drastically reduced.”

“This could lead to people disengaging with messages given to them in the future as they do not know who or what to trust. Educating audiences about such technologies and advancing their digital literacy may make us less gullible but at the same time it may make us, in general, less trusting.”

In a third study, the researchers explored whether awareness of deepfake technology influences our behavior. Half of the participants were informed about the existence of deepfake faces, while the other half were kept in the dark.

The results showed that informed participants displayed lower levels of trust in their virtual interactions across all of the faces presented to them, regardless of whether those faces were real or AI-generated.

The subversive and increasingly widespread use of technologies that can generate realistic-yet-fake photos and videos risks eroding trust in the informational sources we rely on most, such as social media, with far-reaching consequences for our societies.
