Deepfakes and the Law: an excerpt from my Society for Computers and Law Webinar
On Tuesday, 8 September 2020, I was delighted to give a webinar for the Society for Computers and Law entitled Me and my Deepfake: a closer look at image rights and our digital selves. SCL is a fantastic educational charity so, if you like what you see here, I would encourage you to join SCL or perhaps donate to the organisation.
My webinar was structured into three main sections: the technology behind deepfakes, the real world risks they can pose, and different laws which may be available to combat an unwanted deepfake. In this post, I’m going to share my slides from the second section – together with a (loose) transcript of what I said during the webinar.
Because the audience included lawyers and students, as well as those simply interested in tech, I tried to keep it engaging and accessible. For that reason, my presentation was neither overly technical nor academic. For my technical or academic writings on the subject and more, please do consider checking out my portfolio. With that caveat now out of the way, here are some of my slides and my speaker’s notes!
Let’s start out with some numbers. These are courtesy of Sensity.ai, which works to detect deepfakes. They’ve put out a lot of really interesting research, so I would encourage you to look at their reports.
As I mentioned in the technology section, GAN was invented in 2014, and by 2017 deepfakes were being used to generate image-based forms of sexual abuse. By the end of 2018, Sensity had detected just under 8,000 deepfakes. A year later, this number nearly doubled to almost 15,000. As of June this year, we are at nearly 50,000 detected deepfakes.
Where are these deepfakes coming from? There are about 20 forums and online communities dedicated to making them, and from 13 of those, Sensity counted about 100,000 unique members. (But do note my second batch of statistics, below!)
So, what are deepfakes actually doing as they float around cyberspace? In addition to being good for a laugh, we know that they pose risks. But how do we conceptualise those risks?
Well, a group of very clever people at the International Risk Governance Workshop I participated in last year came up with a simple and elegant framework. It essentially breaks risks down into three categories: risks to society, risks to businesses, and risks to individuals.
Social risks are those which threaten societal cohesion and norms of trust and truth. We’re talking about political manipulation, deepfakes of politicians, and big-time scandals that would seek to derail national security, stoke mass panic, and so on. These are the ones which get a lot of attention in the media: headlines which say, this is the end of democracy.
These risks are real and the research focusing on them is very compelling. Several states including Texas and California have already made it a criminal offence to maliciously publish a deepfake depicting a candidate for political office within a certain period of time before an election.
But. We’re just not quite there… yet.
First, videos of elected officials are subject to extreme scrutiny, and usually more than one camera is aimed at a politician at any given moment. Deepfakes made by your average person are not yet good enough to hold up to that scrutiny.
Secondly, we arguably don’t need political deepfakes because cheap fakes or shallow fakes do a pretty good job of distorting political opinion already. In fact, we see that fewer than 5% of deepfakes actually concern politicians.
We see manipulated videos, yes, but not deepfakes. For example, here is CNN reporter Jim Acosta asking Donald Trump, who is off camera, a question at a press conference. In the middle of the question, Trump asked one of his staffers, this young lady here, to take the microphone away. The footage was sped up to make it appear as though Jim was karate-chopping the lady’s arm.
Moving beyond social risks now, let’s look at businesses and individuals. These risks can be put into sub-categories, the first of which is financial.
These include stock price manipulation and insurance fraud. There are also financial risks posed to individuals, of course, such as phishing and identity theft. For instance, if you were on a Zoom call with your boss and they asked you to do something, I wonder how many of us would really hesitate to do that thing. If the video looked weird, maybe it’s just a bad connection. Or, maybe it’s a deepfake that someone has plugged into the video feed.
We’ve also already seen “deepfaked voices”, wherein someone made a wire transfer of a substantial amount of money following a phone call from what sounded like their company’s CEO. But, by and large, we already have a good legal framework to tackle these financial crimes. The fact that deepfakes are involved is largely just a method to achieve some other, illegal end.
The other subcategory of risk concerns reputational harm. This is the area I’m particularly interested in: here we have brand damage, false endorsements, and all forms of harassment and defamation.
Before deepfakes came about, I was actually doing a fair amount of writing and research on reputation and image rights – notably with respect to influencers, actors, and others with personal brands. Thanks to social media, the image rights framework was already getting a bit complicated, but deepfakes have made it even more convoluted.
Let’s look at some more numbers. You may recall that I mentioned that only about 5% of detected deepfakes in the public sphere depict politicians. While another 10% depict business figures or those in miscellaneous categories, the overwhelming majority – 85% – of deepfakes depict actors, models, and athletes. Increasingly, influencers and social media personalities are also being targeted.
But wait. These are just the public deepfakes that have been picked up by Sensity’s detection software. This does not capture the deepfakes made on local computers and then shared on WhatsApp or sent through a DM on Twitter.
Deepfakes are available as a form of entertainment for anyone to make or enjoy. The software is free to download on BitTorrent and GitHub, and hundreds of YouTube tutorials provide step-by-step instructions. Some freelancers even sell bespoke deepfakes for as little as £5 per video on marketplaces such as Fiverr. Mobile apps like ZAO, Doublicat, and AvengeThem can generate face-swapped videos using just one selfie as their source, and even the mainstream apps Instagram and Snapchat have ‘filters’ which can easily do the same.
I made this deepfake featuring me as “Kat Stratford” in 10 Things I Hate About You – using one selfie – and it took about three seconds to generate.
Admittedly, most deepfakes made casually and quickly are readily detectable. Here, in my cheap and fast deepfake, we see that my nose is thinner than Julia’s, my chin is different, and the blue of my eyes is captured – as are the bags beneath them! It’s still not great, but it is very good for having been made from a single source photo.
Importantly, the technology will undoubtedly continue to improve, and deepfakes will likely remain a popular phenomenon because no specialised technical knowledge is required. Besides, minor inconsistencies or glitches are no deterrent for those who make them for a laugh. Plus, even deepfakes which are not perfect can cause harm.
I’ve been speaking to several people involved in family law and domestic abuse research and I can tell you that already, people are using deepfakes to absolutely destroy lives. (See also Suzie Dunn’s research.) For example, an individual could use a deepfake to falsify evidence that is submitted to courts during child custody battles. This is a bit of a mood dampener but – it is important to mention.
Some people dismiss image rights as trivial, or only something that famous actors need to be concerned about. People have told me, “well, don’t put stuff on Instagram if you don’t want it to be shared or manipulated without your consent!” Ok, fair enough. That may be true, but it’s also a bit naïve.
The world is changing. Our lives – as evidenced especially through lockdown – are increasingly moving online. We are developing digital identities through our behaviour, our web browser history, our Twitter and LinkedIn profile photos, and so much more. The fact that a photograph is shared to thousands or indeed millions of people online does not make it any less personal for the person depicted.
If only one thing is remembered from today’s webinar, it should be that the law surrounding the use of someone’s image is incredibly complex, and that yes – it matters to all of us, not just celebrities.