A “deepfake” video falsely depicting Ukrainian President Volodymyr Zelenskyy calling on his forces to surrender to the Russian army circulated online Wednesday, including on Facebook, which later removed the misinformation.
University of Virginia professors Danielle Citron, a Law School expert on deepfakes and digital privacy, and David Nemer, an assistant professor of media studies who researches online misinformation, talked to UVA Today about a future in which malign actors use digital tools to create false impressions for their own ends, and about what that bodes for democracies and their security.
Q. Is this the pivotal moment experts have been fearing for deepfakes?
Citron: Alas, yes. When Bobby Chesney [a professor at the University of Texas at Austin] and I started writing about deepfakes in early 2018, the national security implications were largely hypothetical. Then, as now, the problem was very real in the case of deepfake sex videos, where mostly women’s faces were inserted into porn in a fairly realistic way. Chesney and I warned about the national security implications, but at the time we saw only glimmers of a well-timed, destructive fake. Now we see our warnings coming to fruition. It is a bummer to say, “I told you so.”
Nemer: The recent video was poorly edited: his accent was off, and neither his head nor his voice held up as authentic on close inspection. This deepfake video brought nothing new and was quickly debunked by people outside Ukraine and Russia.
Although the video itself was not that dangerous, the situation into which it was inserted is what makes it extremely dangerous. People outside Ukraine and Russia were able to debunk it so easily because they had steady internet connections and quick access to multiple sources of information, which is not the case for Ukrainians and Russians.
The Russian government controls the local news media, and it has blocked access to every major social media platform, such as Facebook, Instagram, Twitter, TikTok and Telegram. That makes it very hard for the average Russian citizen to verify whether the video was real.
As for Ukrainians, the internet connection in Ukraine has been spotty and unreliable because of constant Russian attacks on the country’s telecommunications infrastructure. Ukrainians are primarily concerned with surviving and don’t have the mental bandwidth to stop and verify information online, so the war doesn’t allow them to engage with information as critically as they would in peaceful times.
Q. Is there a way to assess the damage from such videos?
Citron: One part of the damage is the way a well-timed deepfake can turn the tide in a war and inspire and justify physical attacks.
Another crucial part of the damage is what Chesney and I call the “liar’s dividend,” where the idea of fakery can be used to dismiss real proof of destruction. The liar’s dividend is what Putin and other masters of disinformation mine to great effect, suggesting that the only truths or videos that are real are the ones they say are real.
Nemer: It is hard to assess the damage of deepfakes because of how easily and widely they can spread and how convincing they can be. They can cause short- and long-term social harms. I cite Ashish Jaiman in slides I share with my students: "Deepfakes can speed up the already declining trust in media. Such erosion can contribute to a culture of factual relativism, fraying the increasingly strained social fabrics of civil society."
Q. Would fear of responding too early to something that could be a deepfake give the perpetrator an advantage?
Citron: That is always the case. Deepfakes are highly effective because they spread like wildfire and are taken as true before they can be debunked. As my colleague Dr. Wael Abd-Almageed of the University of Southern California’s Visual Image Lab tweeted in response to the Zelenskyy video, the problem is time: computer scientists are not afforded the time to authenticate or debunk videos before they are shared and believed.
Q. What are some things to be looking for in deepfakes? Are they still distinguishable from reality in most cases, if you know the telltale signs?
Citron: Often, the best we have is context. Deepfakery is becoming so sophisticated that the untrained eye, and perhaps soon even the most discerning one, can’t distinguish what is real from what is fake.
Nemer: Yes, they are still distinguishable from reality in most cases. Pay attention to the face, and check whether the voice is synchronized with the lips. High-end deepfake manipulations are almost always facial transformations, so also pay attention to the cheeks and forehead.
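To make those telltale signs concrete, here is a minimal, hypothetical Python sketch, assuming the OpenCV library (`opencv-python`), that flags faces whose forehead and cheek regions are unusually smooth compared with the face as a whole, one common (though far from universal) face-swap artifact. The region offsets, threshold and filename are illustrative assumptions, not part of Nemer's advice and not a production detector.

```python
# Illustrative heuristic only, not a real deepfake detector: it flags faces
# whose forehead/cheek regions are much smoother (less textured) than the
# face overall, one common artifact of face-swap manipulation.
import cv2

# OpenCV ships this Haar cascade for frontal face detection.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def sharpness(gray_region):
    """Variance of the Laplacian: a standard texture/focus measure."""
    return cv2.Laplacian(gray_region, cv2.CV_64F).var()

def suspicious_faces(frame_bgr, ratio_threshold=0.35):
    """Return face boxes whose forehead or cheek band is far smoother than
    the whole face. ratio_threshold is an illustrative, untuned knob."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    flagged = []
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
        face = gray[y:y + h, x:x + w]
        forehead = face[: h // 5, :]               # rough top band
        cheeks = face[2 * h // 5: 3 * h // 5, :]   # rough middle band
        base = sharpness(face) + 1e-9              # avoid division by zero
        if min(sharpness(forehead), sharpness(cheeks)) / base < ratio_threshold:
            flagged.append((x, y, w, h))
    return flagged

# Usage sketch: test one frame of a (hypothetical) downloaded clip.
cap = cv2.VideoCapture("zelenskyy_clip.mp4")
ok, frame = cap.read()
if ok:
    print("possible manipulation in boxes:", suspicious_faces(frame))
cap.release()
```

A low texture ratio is only a hint; compression, lighting and makeup can produce the same effect, which is why context, as Citron notes above, remains the strongest signal.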
Q. Does this bode poorly for interference by Russia or others in our upcoming elections?
Citron: This isn’t great, to be frank. We must be vigilant, and the press must be vigilant in vetting images and videos before sharing or validating them, lest deepfakery swing elections, crash initial public offerings or aggravate attacks.
Nemer: Yes. Given that they have been able to hack not only social media accounts but also newscasts, it seems they are becoming savvier about hacking the whole information ecosystem.
Q. Any other thoughts you want to share?
Nemer: Deepfakes, like any other technology, can be used for good and evil, but to fully understand their implications, we should approach them as social actors embedded in a sociocultural network.
Citron: Welcome to the future. It is here, and it is not pretty.