Kelsey Farish

Political Deepfakes: social media trend or genuine threat?

In the BBC’s new televised drama The Capture, hackers intercept a broadcast interview with a government minister and replace his likeness with a deepfake. The strikingly lifelike digital clone goes on to falsely endorse a major technology contract with an overseas firm, with serious implications for national security.


Just a few years ago, such a scenario might have been confined to the world of science fiction. But today, the threat is very real.


In March 2022, shortly after the start of the Russian invasion of Ukraine, a video of Ukrainian president Volodymyr Zelenskyy appeared on social media, instructing his citizens to lay down their weapons. Soon afterwards, the real Zelenskyy posted a Facebook video debunking the earlier message as a manipulated fake, and experts noted that this particular example was a relatively unconvincing deepfake. Nevertheless, such attempts will only become more pervasive and more difficult to spot as the technology improves.


Of course, it is important to remember that deepfakes can be used for creative, educational and research purposes, including within the audiovisual sector and various medical fields. Putting those uses aside, however, it is clear that when deployed by nefarious actors, synthetic (AI-generated) media can pose a significant risk to individuals, businesses, and society more generally.


The overwhelming majority of harmful deepfakes - more than 90% - fall into the category of image-based sexual abuse, often referred to as “deepfake porn”. These videos almost always target women, and pose significant risks to their safety, dignity, reputation, and mental health, as discussed in Deepfakes and their impact on women.


This troubling statistic notwithstanding, there is growing concern about forms of synthetic media that could be used in a political context to harm democratic processes or otherwise disrupt government policy. Whilst deepfakes as a form of electoral interference were few and far between during the 2020 presidential election in the United States, evidence suggests that in just the last two years, synthetic media technology has continued to improve and has become easier both to create and to access.



Image courtesy of Maxim Ilyahov


Such technology goes beyond deepfakes and other forms of face-swapped video. Text-to-image synthesis tools like OpenAI’s DALL-E and Google Brain’s Imagen can turn any string of descriptive words into realistic images within just a few seconds. Because users with no coding or photo-editing skills can generate such an image through a very simple website, mass proliferation of synthetic media is anticipated in the coming years.
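To illustrate just how low the barrier has become, such an image can also be requested programmatically in a handful of lines. The Python sketch below is illustrative only: it assumes the openai package is installed, an API key is set in the OPENAI_API_KEY environment variable, and the model name shown is an assumption rather than anything drawn from this article.

    # Minimal sketch of programmatic text-to-image generation.
    # Assumes the "openai" Python package and an API key in the
    # OPENAI_API_KEY environment variable; model name is illustrative.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.images.generate(
        model="dall-e-2",                       # illustrative model name
        prompt="a press conference at dusk, photorealistic",
        n=1,                                    # number of images
        size="1024x1024",
    )

    print(response.data[0].url)  # URL of the generated image

A few lines like these, or a simple web form wrapped around them, are all that stands between a text prompt and a shareable synthetic photograph.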

If manipulated images work their way into the mainstream before being detected, it is easy to imagine how they could disrupt the decision-making processes of the electorate. We could see fake images of politicians in compromising personal situations or, as dramatised in The Capture, fake footage of politicians making false statements about national security. We could also see an increase in the “liar’s dividend” phenomenon, whereby an authentic video is discredited as “fake news”.


Each of these forms of misinformation and disinformation has the potential to disrupt and distort an already complicated media ecosystem. Whilst an individual video clip may be easily debunked upon close inspection, the cumulative effect contributes to distrust in journalism, government, and other public institutions.


What can be done about political deepfakes?


Regulation of political deepfakes presents a significant challenge. In most jurisdictions, any legislation seeking to ban deepfakes of political officials or candidates would require significant carve-outs to protect the freedom of expression of their creators. Such a carve-out would, for example, permit satirical deepfakes, or those generated for artistic - rather than manipulative - purposes. This requirement in turn poses a challenge for lawmakers: art and satire are subjective categories, and exemptions designed for them can be exploited as cover for manipulative content.

In the U.S. state of Texas, it is an offence to create and distribute a deepfake which is “politically motivated” to “sabotage” a candidate or an election. However, it remains to be seen how effective the law, introduced as Senate Bill 751 (TX SB751), will prove in practice.


In addition to - or instead of - government legislation, social media platforms can prohibit or otherwise limit the use of deepfakes on their websites and apps. When a deepfake is detected, content reviewers can remove the video or label it as manipulated. However, these solutions are often time-consuming, costly, and far from perfect. Both human and AI-assisted content review systems can overlook deepfakes on the one hand, and incorrectly remove or flag genuine content on the other.


Fortunately, several workable solutions are beginning to emerge from academic researchers, established technology companies such as Adobe and Microsoft, and newer scale-up companies like Truepic. Some tools generate metadata (e.g. “watermarks”) at the point of creation, so that the media can later be verified as authentic, whereas other tools seek out anomalies and other digital artefacts to help users identify manipulated images. And in February 2021, the Coalition for Content Provenance and Authenticity (C2PA) was founded by Adobe, together with Arm, the BBC, Intel, Microsoft and Truepic. The C2PA seeks to formulate open, royalty-free technical standards to fight disinformation.
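In principle, the point-of-creation approach works like a digital signature: the capture device signs a cryptographic hash of the image and its metadata, and anyone holding the corresponding public key can later confirm that the file has not been altered. The Python sketch below illustrates that general idea using the cryptography package; it is a simplified illustration, not the actual C2PA specification, and the helper function names are hypothetical.

    # Simplified sketch of point-of-creation provenance signing.
    # Illustrative only - not the real C2PA standard. Assumes the
    # "cryptography" package (pip install cryptography).
    import hashlib
    import json

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
    )

    def sign_at_creation(image_bytes, metadata, private_key):
        """Sign the image digest plus its metadata (hypothetical helper)."""
        payload = hashlib.sha256(image_bytes).hexdigest().encode()
        payload += json.dumps(metadata, sort_keys=True).encode()
        return private_key.sign(payload)

    def verify_later(image_bytes, metadata, signature, public_key):
        """Re-derive the payload and check the signature still matches."""
        payload = hashlib.sha256(image_bytes).hexdigest().encode()
        payload += json.dumps(metadata, sort_keys=True).encode()
        try:
            public_key.verify(signature, payload)
            return True       # file and metadata are unchanged
        except InvalidSignature:
            return False      # file was altered after signing

    # Example: a camera signs at capture; a newsroom verifies later.
    key = Ed25519PrivateKey.generate()
    image = b"...raw image bytes..."
    meta = {"device": "example-camera", "captured": "2022-09-01T12:00Z"}
    sig = sign_at_creation(image, meta, key)
    print(verify_later(image, meta, sig, key.public_key()))         # True
    print(verify_later(image + b"!", meta, sig, key.public_key()))  # False

The second check fails because even a one-byte change to the image produces a different hash, which is precisely what allows an unmodified original to be distinguished from a manipulated copy.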


In time, we may see solutions such as these adopted at scale and across a variety of media. Until then, digital media literacy remains more important than ever, because we cannot always believe what we see.
