Study: Can voters be tricked by deepfake content generated with AI?

[Participants in a study by student researchers at Utah Valley University (UVU) couldn’t distinguish real and deepfake video of political actors, as reported in this story from The Salt Lake Tribune. More details about the study are provided in an excerpt from coverage in Deseret News that follows below, including the fact that the deepfake video “was perceived as more knowledgeable, as more trustworthy, as more persuasive, and of better content quality.” For much more information including images and videos, see the UVU website. –Matthew]

[Image: Hope Fager speaks as Utah Valley University students present a deepfake study, in Orem on Monday, October 28, 2024. Credit: Trent Nelson | The Salt Lake Tribune]

Can voters be tricked by deepfake content generated with AI?

Utah Valley University student researchers found half of the viewers couldn’t tell the difference between fake and real, and some thought AI content was more trustworthy.

By Robert Gehrke
October 29, 2024; Updated: October 31, 2024

Elections officials in Utah and nationwide are facing an onslaught of disinformation from people trying to discredit democratic institutions, Lt. Gov. Deidre Henderson said Monday, and it requires that voters be “vigilant” and trust the systems in place.

Henderson’s comments came as students at Utah Valley University presented the results of their research on people’s ability to recognize a deepfake video generated with artificial intelligence.

Half of those surveyed in the research could not tell a deepfake from an actual human, and, in many instances, the deepfake was considered more knowledgeable, trustworthy and persuasive than a human speaker.

Using eye-tracking software and measuring physical reactions, the student researchers also found that subjects were more engaged with deepfakes than with videos of an actual person, and that they also experienced more confusion.

Henderson said that sometimes, amid the “disinformation,” there are legitimate concerns raised “but sometimes those questions and concerns are raised because people are trying deliberately to undermine our confidence and faith and trust in our democratic process.”

“The tools may be different, the methods may be different, but the attempts to dissuade, the attempts to undermine, the attempts to trick people, that’s nothing new,” she said, “but it is something that we have to be continually vigilant and guard against.”

Henderson, as lieutenant governor, is the director of state elections. She also is running for reelection this year with Gov. Spencer Cox. In June, a deepfake circulated of Cox purportedly admitting to election improprieties.

The Utah Valley University students’ project was the first phase in testing how convincing AI-generated images can be. Brandon Amacher, director of the Emerging Tech Lab and an instructor of national security at UVU who oversaw the students’ work, said the goal was to quantify the problem.

Next, the students plan to create a political campaign with an AI-generated candidate to specifically test its application in politics.

Russia, Iran, China and North Korea have all used deepfakes to promote their interests this election, said Michael Kaiser, president and CEO of Defending Digital Campaigns, a nonpartisan nonprofit that provides cybersecurity resources to political campaigns. But the aim is not necessarily to sway voters one way or the other.

“The goal of the people creating deepfakes is not just about the way you vote, it’s to make our democracy not work. That is their goal,” he said. “In some ways, they could care less who you vote for as long as you believe the system doesn’t work, because, for our adversaries, that’s winning.”

Amacher said the spread of deepfakes can “contribute to the rot we have in the trust in our information landscape, and we get to a point where the termites eat out the foundation and we don’t know if we need to condemn the building anymore.”

Kaiser and other experts on the panel said when people see a video online that seems sensational or generates an emotional response, they should pause, try to check the source of the original posts and question the intent of the poster before spreading it further.

[From Deseret News]

‘21st century problem’: Study reveals public struggles to distinguish between real and deepfake media

AI-generated content could realistically mimic identities with at least 50% accuracy, research team shares

By Emma Pitts
October 28, 2024

[Snip]

“The first step of addressing any problem is understanding the problem,” Hope Fager, a UVU student studying national security and computer science and the strategic research team lead in the Emerging Tech Policy Lab in the Center for National Security Studies, said in the student presentation Monday.

“Therefore, the overall goal of this study is to give policymakers and campaigns the information they need in order to combat this issue effectively,” she added. “In this study, we simulated as best as possible the natural circumstances of scrolling through social media and seeing videos, and then we looked at how those videos affected people that saw them.”

Fager said it took her just one weekend, working alone, to create a deepfake using a free online AI generator. She took two real videos — one of herself and one of her team members, Leah — swapped their faces and “replaced the audio of the video with an AI-generated audio of Leah’s voice.”

Three questions were laid out before the study:

  1. Is there a difference in how credible real media is compared to deepfake media?
  2. How well can people distinguish deepfake media from authentic media?
  3. Is there a difference in people’s subconscious reactions to real versus deepfake media?

A total of 244 participants were included in the study nationwide, 40 of whom participated in person at the Vivint SMARTLab, where biometric technologies were used to analyze nonconscious responses.

Mauricio Cornejo Nava, a business and analysis student at UVU and customer experience researcher for the UVU SMARTLab, said participants were divided into four groups — real video, fake video, real audio and fake audio — to eliminate as much bias as possible.

“After the participants were exposed to the media content, we asked them a series of questions to gather their thoughts and impressions,” Nava said. “We used eye tracking, which tracks where the participants were looking when they were watching the content, and facial expression analysis, which can read over 3,000 micro-expressions and translate that into several emotions.”

“What we found is that the deepfake video performed better in four out of the six metrics than the real video. It was perceived as more knowledgeable, as more trustworthy, as more persuasive, and of better content quality,” Nava added.

Once participants were informed of the study’s true nature and that there was a chance they had fallen victim to a deepfake, not all were confident in which one they had been exposed to.

Nava said that “50% of the people that saw the deepfake content were confident in their response. For the real audio, 60% were confident. And for the real video, 70% were confident. So even though they saw the real content, they were still not confident at all.”

Most participants admitted that it didn’t cross their minds that the content they viewed would be AI-generated, and others were concerned that AI-generated material has become so realistic that it’s not identifiable.

“Once people have made an assessment, they’re generally going to stick to their guns, and they’re going to assume that they’re right, even if they’re wrong,” Fager added.

“This means that, with a good deepfake, a stranger could fraudulently become law enforcement, a field expert, a personal friend or a politician; based on our research, someone could adopt their identity, authority and expertise with at least a 50% accuracy rate.”

