The DeepBrief — April 2021
Back by popular demand, here are the news stories and research papers about deepfakes that have recently caught my attention. I hope you find this first edition of what may (hopefully!) become a monthly series useful.
• EU politicians tricked by deepfakes of Russian opposition leaders. A series of senior European and UK politicians were approached in late April by individuals who appear to be using deepfake filters to imitate Russian opposition figures. Tom Tugendhat, the chair of the UK foreign affairs select committee, said the fake meetings were an attempt to discredit the jailed opposition leader, Alexei Navalny, and his supporters. As reported by The Guardian, Richard Kols – who chairs the foreign affairs committee of Latvia’s parliament – admitted, “It is clear that the so-called truth decay or post-truth and post-fact era has the potential to seriously threaten the safety and stability of local and international countries, governments and societies.”
• Fraudsters in China use deepfakes to fool financial services biometric checks. The South China Morning Post reported that a group of tax scammers hacked a government-run identity verification system to fake tax invoices valued at US$76.2 million (£54.7m). The fraudsters used a basic app to manipulate the images and create deepfake videos that made it seem as if the faces were blinking, nodding, or opening their mouths. They then used a special phone to hijack the mobile camera typically used to perform facial recognition checks. In doing so, they were able to trick the tax invoice system into accepting the premade deepfake videos, which were good enough to beat the liveness detection check.
• Deepfakes and cheerleader rivalries. In March, reports surfaced of a woman in the United States using photographs of her daughter’s classmates to create a disturbing deepfake video. As explained by the BBC, Raffaele Spone allegedly depicted the girls naked, drinking, and smoking, and then shared the video with the cheerleading coach. Spone hoped the girls would then face disciplinary action and be kicked off the high school cheerleading team. It is understood that she hoped the move would benefit her own daughter, who was apparently unaware of the plan. I was interviewed by Claudia-Liza Armah for Channel 5 News Tonight on 16 March 2021 to discuss the case.
• Geographers warn that deepfake technology could be used to manipulate satellite imagery. Researchers from the University of Washington (in my hometown of Seattle!) warn of the dangers of falsified geospatial data and call for a system of geographic fact-checking. The work challenges the general assumption of the “absolute reliability of satellite images or other geospatial data,” said Professor Bo Zhao, and certainly, as with other media, that kind of thinking has to go by the wayside as new threats appear. You can read the full paper at the journal Cartography and Geographic Information Science. Hat tip to my cousin and UW PhD student, Marlena, for the story!
• DeepNostalgia divides opinion on reanimating ancestors. MyHeritage.com proudly exclaims that you can “animate the faces in your family photos with amazing technology” to “experience your family history like never before”. Over 1 million photos were animated in the first 48 hours following the launch of the service. Although some users on social media described the tool as a bit creepy, others were understandably moved by the experience, as shown on the MyHeritage Blog.
• Macarena for the apocalypse. To mark Earth Day on 22 April, Channel 4 released a deepfake video of climate crisis activist Greta Thunberg teaching fans the “Man Like Greta” TikTok dance. The deepfake is performed by impressionist Katia Kvinge and written by Stu Richards and Alasdair Beckett-King, and shows the Swedish environmental activist dancing to an original hip-hop song about climate change. Catchy lyrics include “ice melts / seas rise / what if everybody dies?” The video also includes a “how-to” tutorial.
According to Google, 525 academic papers have been published about deepfakes in 2021 thus far. For a bit of context, just 19 papers mentioned deepfakes in 2016, a figure that grew to 169 in 2017!
• Deepfake Privacy: Attitudes and Regulation. Writing for the Northwestern University Law Review, Prof. Matthew Kugler and Carly Pace present a novel empirical study that assesses public attitudes towards non-consensual deepfake pornography. Their representative sample viewed nonconsensually-created pornographic deepfake videos as extremely harmful and overwhelmingly wanted to criminalize them. Kugler and Pace argue that prohibitions on deepfake pornographic videos should receive the same treatment under the First Amendment (the United States’ fundamental freedom of speech law) as prohibitions on nonconsensual pornography, rather than being dealt with under the less-protective law of defamation. You can download the paper, published in February 2021, at SSRN.
• “I Found a More Attractive Deepfaked Self”: The Self-Enhancement Effect in Deepfake Video Exposure. Published in the journal Cyberpsychology, Behavior, and Social Networking, Fuzhong Wu, Yueran Ma, and Zheng Zhang at Tsinghua University in Beijing explore how deepfakes can alter body image – for the better. Young women are no longer passive viewers of attractive celebrities: thanks to mobile apps like ZAO, they are able to become part of the perfect images. This study investigated the impact of viewing self-celebrity deepfaked videos (SCDV) on young female users’ appearance self-evaluation (i.e., body image and state appearance self-esteem). The result? Participants in the SCDV condition perceived themselves as more physically attractive, experienced greater satisfaction with their own facial features, and reported marginally higher state appearance self-esteem! Surprised? I was too! This study reveals the potential of deepfake technology as an intervention technique for body image disturbances.
• Political Deepfakes Are As Credible As Other Fake Media And (Sometimes) Real Media. In an impressive 60+ page paper published on GitHub, Soubhik Barari, Christopher Lucas and Kevin Munger demonstrate that deepfakes of public officials are credible to a large portion of the American public – up to 50% of a representative sample of 5,750 subjects. However, they are no more credible than equivalent misinformation in extant modalities like text headlines or audio recordings. In other words, deepfake scandal videos are no more credible or emotionally appealing than comparable fake media. The paper will appeal to those who like lots of statistical analysis!
• Deepfake in Face Perception. Psychologists at the Moscow Institute of Psychoanalysis and Moscow State University of Psychology & Education have studied how realistic deepfakes with strange facial expressions impact the viewer’s emotional perception. The study shows that image synthesis technology like that used in deepfake generation significantly expands the possibilities of psychological research into interpersonal perception. According to the researchers, the use of deepfakes simplifies the creation of the “impossible face” stimulus models necessary for in-depth study of representations of the human inner world, and creates a need for new experimental-psychological procedures. The 15-page paper is in Russian but free to download (and then translate). The abstract is in English.
• A Survey on Deepfake Video Detection. Writing for The Institution of Engineering and Technology (an Open-Access academic library), Chinese researchers Peipeng Yu, Zhihua Xia, Jianwei Fei and Yujiang Lu have published a very helpful overview of the current research status of deepfake video detection. Namely, these include: general network-based methods; temporal consistency-based methods; visual artefacts-based methods; camera fingerprint-based methods; and biological signal-based methods. As current detection methods are still insufficient to be applied in real scenarios, their research concludes that future deepfake detection methods should pay more attention to generalization and robustness.
• Deepfake Videos in the Wild: Analysis and Detection. Eight researchers from Virginia Tech, the University of Virginia, the University of Michigan, LUMS Pakistan, and Facebook have published a really interesting article on the analysis and detection of deepfakes. You can download the .pdf here, or you can watch their 15 minute video summary on YouTube. First, the team collected the largest dataset of deepfake videos “in the wild”, containing 1,869 videos from YouTube and Bilibili, and extracted over 4.8M frames of content. Second, they analysed the growth patterns, popularity, creators, manipulation strategies, and production methods of deepfake content in the real world. Third, they systematically evaluated existing defenses and observed that they are not ready for real-world deployment. Fourth, they explored the potential for transfer learning schemes and competition-winning techniques to improve defenses.
• A Machine Learning Based Approach for Deepfake Detection in Social Media Through Key Video Frame Extraction. In this paper, Alakananda Mitra, Saraju P. Mohanty, and Elias Kougianos of the University of North Texas, together with Peter Corcoran of the National University of Ireland, address the social and economic issues caused by fake videos on social media. The paper is highly technical, but worth looking through if detecting social media deepfake videos is of interest. In particular, the researchers introduced an algorithm which cuts down the computational burden significantly.
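For readers curious about the general idea behind key frame extraction, here is a minimal, hypothetical sketch in Python. It is not the authors' algorithm – the function names, the frame-differencing heuristic, and the threshold parameter are all my own illustrative assumptions – but it shows why selecting key frames cuts the computational burden: a classifier only needs to run on frames that differ meaningfully from the last one kept, rather than on every frame of a video.

```python
# Hypothetical sketch only: illustrates the *idea* of key frame extraction,
# not the method from the Mitra et al. paper. Frames are modelled as flat
# lists of pixel intensities for simplicity.

def mean_abs_diff(a, b):
    """Mean absolute pixel difference between two equal-length frames."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def extract_key_frames(frames, threshold=10.0):
    """Keep the first frame, then any frame whose difference from the
    last kept frame exceeds `threshold` (an assumed tuning parameter)."""
    if not frames:
        return []
    kept = [frames[0]]
    for frame in frames[1:]:
        if mean_abs_diff(frame, kept[-1]) > threshold:
            kept.append(frame)
    return kept

# Toy example: four 1-D "frames" of pixel intensities.
video = [
    [10, 10, 10],   # first frame, always kept
    [11, 10, 10],   # near-duplicate, skipped
    [80, 80, 80],   # large scene change, kept
    [81, 80, 79],   # near-duplicate, skipped
]
keys = extract_key_frames(video, threshold=10.0)
print(len(keys))  # → 2 (only 2 of 4 frames go to the detector)
```

In a real pipeline the same principle applies per video: the expensive deepfake classifier is invoked only on the handful of kept frames, which is where the computational savings come from.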