Q and A with Brandon Amacher of Utah Valley University

In October 2024, Utah Valley University conducted a study on how deepfakes affect viewers, whether viewers can identify deepfakes, and how viewers engage with deepfake content.

The study combined online and in-person participants, who viewed or listened to real and AI-created content. In total, 244 subjects participated in the study, including 40 tested on-site.

We had a chance to pose some questions to Brandon Amacher, the director of the Emerging Tech Policy Lab for the I3SC and an instructor at the UVU Center for National Security Studies, who was one of the leads on the study.

DDC: Tell us a little bit about the National Security program you run at UVU.

BA: Established in January 2016, the UVU Center for National Security Studies is one of the premier national security programs in the country. The CNSS employs a multi-disciplinary academic approach to examine both the theoretical and practical aspects of national security policy and practice, with areas of focus in intelligence, emerging technology, cybersecurity, and homeland security.

DDC: You, along with some colleagues and, most importantly, students, conducted a research study on how people respond to inauthentic content. What was the impetus behind the research?

BA: Several of us here at UVU, including colleagues at the Center for National Security Studies and the Gary R. Herbert Institute for Public Policy, were deeply concerned about the potential impact of deepfake media on election security and public trust. We decided to take action and to leverage the expertise of UVU's Neuromarketing SMARTLab, which has extensive experience studying subjects' non-conscious responses to digital content, to determine just how impactful deepfake media actually is. This combination of expertise and experience allowed us to design and execute a study that could effectively quantify the severity of the problem for policymakers.

DDC: What were the research questions you hoped to answer? 

BA: We designed this study to address four key questions:

  1. Is there a measurable difference in the credibility of legitimate media versus deepfake media?

  2. Do participants exhibit different unconscious responses to real versus deepfake content?

  3. How accurately can subjects identify deepfake media after viewing or listening to it?

  4. Is there a difference in the ability to distinguish deepfakes in audio versus video content?

DDC: We know you developed a methodology that used both in-person testing and online participants. Can you describe the approach?

BA: A total of 244 subjects participated in the study, with 40 tested on-site to collect biometric data, including eye-tracking and facial coding. The participants were divided into four equal groups, each exposed to either a video or an audio sample.

At the beginning of the test, participants were unaware that some content was AI-generated. After viewing or listening to the media, participants evaluated the message and speaker on factors such as credibility, knowledge, and trustworthiness. They rated the content on a 7-point Likert scale, with 1 being the least favorable rating, 4 neutral, and 7 the most favorable, and were then given the opportunity to explain each rating in a short-answer response. The questions in this section of the study were as follows:

  1. What was your impression of the speaker? (Short Answer)

  2. How knowledgeable do you think the speaker is about the topic? (Likert Score & Short Answer)

  3. How trustworthy do you think the speaker is about the topic? (Likert Score & Short Answer)

  4. How persuasive do you find the speaker? (Likert Score & Short Answer)

  5. How reliable did you find the information in the sample? (Likert Score & Short Answer)

  6. How would you rate the overall quality of the content? (Likert Score & Short Answer)

  7. This content seemed authentic. (Likert Score & Short Answer)

Following this section, subjects were informed that the study aimed to measure the impact of deepfakes and that some content may have been AI-generated. Participants were then asked to assess whether they believed the media was real or AI-generated and to rate their confidence in their judgment. 
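As a rough illustration of the kind of between-group comparison this design supports, here is a minimal sketch in Python. The ratings below are hypothetical, not the study's data; the sketch simply shows how two groups' Likert scores for one question could be compared with Welch's t statistic.

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    va, vb = variance(a), variance(b)  # sample variances
    se = math.sqrt(va / len(a) + vb / len(b))
    return (mean(a) - mean(b)) / se

# Hypothetical 1-7 Likert ratings for one question:
# one group saw real media, the other a deepfake.
real_ratings = [5, 4, 6, 5, 4, 5, 6, 4, 5, 5]
fake_ratings = [5, 5, 4, 6, 4, 5, 5, 4, 6, 5]

t = welch_t(real_ratings, fake_ratings)
print(f"mean(real)={mean(real_ratings):.2f}  "
      f"mean(fake)={mean(fake_ratings):.2f}  t={t:.3f}")
```

With these made-up numbers both group means are identical, so t is 0; in the actual study a statistic like this (with an associated p-value) would be computed per category to test for significant differences.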

DDC: What were the top takeaways from the study?

BA:

  • Impact on Viewer: Participants rated both deepfake and genuine media across several categories, including the speaker's knowledgeability, trustworthiness, and persuasiveness, the reliability of the information, and the quality of the content. The average Likert ratings across each of the categories showed no statistically significant differences between deepfake and real media: deepfakes had effectively the same impact on viewers as real content.

  • Difficulty Identifying Deepfakes in Retrospect: Even after being informed that they might have encountered a deepfake, participants struggled to consistently identify AI-generated content. Across all media types—real video, deepfake video, real audio, and deepfake audio—at least 50% of participants believed the media was "probably real." Furthermore, 57% or more were confident in their assessment, suggesting a roughly 50/50 chance of detecting a deepfake, with most people standing by their initial judgments.

  • Non-conscious Engagement with Deepfakes: Participants showed higher levels of engagement and confusion when exposed to deepfake content, as evidenced by micro-expressions, though they did not report these feelings during post-test interviews. This suggests that deepfakes may trigger a non-conscious response associated with the "uncanny valley" effect. In contrast, real media prompted more traditional emotional responses, which were also expressed more strongly than the emotions elicited by deepfakes.
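The "roughly 50/50" detection rate described above can be checked against pure chance with an exact binomial test. A minimal sketch with hypothetical counts (not the study's actual figures):

```python
from math import comb

def binom_two_sided_p(k, n, p=0.5):
    """Exact two-sided binomial test: p-value for k successes in n trials."""
    prob = lambda i: comb(n, i) * p**i * (1 - p)**(n - i)
    pk = prob(k)
    probs = [prob(i) for i in range(n + 1)]
    # Sum the probability of every outcome at least as extreme as k.
    return min(1.0, sum(q for q in probs if q <= pk + 1e-12))

# Hypothetical: 30 of 61 viewers correctly flag a deepfake.
p_value = binom_two_sided_p(30, 61)
print(f"p = {p_value:.3f}")
```

A result near chance (30/61) gives a p-value far above 0.05, meaning detection performance is statistically indistinguishable from guessing, which is the pattern the takeaway describes.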

DDC: We know that this research was just phase one, and you have a bigger vision of where this research could go. Can you share some of the research questions about inauthentic content that you hope to explore in the future?

BA: We are currently exploring options for follow-up studies that could tackle a range of issues, including:

  • How could deepfake media affect down-ballot elections?

  • Are people more prone to being deceived by deepfake media if it reinforces their previously held beliefs?

  • How could deepfake content be utilized in cybercrime and information warfare? 

DDC: What’s something fun or fantastic about UVU that you think everyone should know?

BA: UVU is intensely focused on providing engaged learning opportunities to students so that they can enter the workforce not only with an academic credential, but with high-impact experience. This project is a perfect example of diverse departments collaborating to afford students the opportunity to have an impact on a critical issue set.

DDC: Where can people read more about the findings?

BA: https://www.uvu.edu/news/2024/10/ai-deepfake-2024-elections-discussion.html