This article was written by Tanya Petersen, who interviewed Kelsey Farish for the École polytechnique fédérale de Lausanne (Swiss Federal Institute of Technology Lausanne) magazine, Dimensions (Summer 2021). Dimensions offers a series of in-depth articles, interviews, portraits and news highlights, and is available in English and in French. Republished here with kind permission from EPFL. 

Kelsey Farish describes herself as an actor who got lost on her way to drama school and ended up at law school instead. Yet it was the marriage of these interests that sparked her focus on publicity law and how images can be appropriated for commercial purposes or endorsement. 

Now a London-based lawyer at international law firm DAC Beachcroft, Farish is one of Europe’s leading experts on deepfakes and advises clients on media, privacy and technology issues. “When I first encountered the technology in 2018 or so, and started writing about it, it was a happy accident: now I’m one of the few lawyers who actually specializes in the issues that arise with deepfakes, in particular the personality rights and persona rights framework,” she says.

Unwanted deepfakes clearly have a dark side. Shockingly, more than 90% of deepfake victims are women, who are subjected to online sexual harassment or abuse through nonconsensual deepfake pornography. The motives range from “revenge porn” to blackmail. Deepfakes targeting politicians or political discourse, by contrast, make up less than 5% of those circulating online. That imbalance changes the debate around how we should approach and regulate deepfakes online.


Tim Berners-Lee, the inventor of the World Wide Web, has warned that the growing crisis of online abuse and discrimination means the web is simply not working for women and girls, and that this threatens global progress on gender equality. Farish believes that the current regulation of deepfakes online is not fit for purpose when it comes to protecting women, and she is spearheading efforts to bring this debate to the forefront.

“The issue with regulating deepfakes really comes down to the tensions between expression and regulation, and unless there’s a specific harm that’s delineated, for example, defamation, fraud or child exploitation, to name a few, you really can’t regulate it. So, for each and every deepfake that pops up, you have to look at it with a magnifying glass and say: OK, what’s going on here?” Farish says.

Recently she gave testimony to the European Parliament’s Science and Technology Options Assessment Panel on the Digital Services Act and Digital Markets Act, a comprehensive set of new rules covering all digital services in the EU, including social media, online marketplaces and other online platforms.

In the EU, individuals have the legal right to be forgotten, that is, to have private information removed from Internet searches and other directories under some circumstances. While this sounds positive, Farish recalls a conversation with the panel regarding whether this existing legal right could be used to tackle malicious deepfakes.

“I first thought to myself, that’s a great idea, just get the unwanted deepfakes taken down using a GDPR request. But then you have to ask, Who would women send this to? The person who made the deepfake in the first place? Say, a random guy in his mom’s basement in Oklahoma? Or to Snapchat or Facebook directly? These platforms often lack sufficient resources to quickly remove problematic posts. Facebook, for example, has 20,000 people working for it, moderating user-uploaded content, and yet we still have issues. And, from a more cynical perspective, it’s arguably in Facebook, Twitter and Snapchat’s commercial interests to keep crazy content online, because it drives advertising clicks.”

Alongside education campaigns from the classroom to the boardroom, Farish believes that, from a legal perspective, an important step in the battle against nonconsensual and harmful pornographic deepfakes online would be to recognize a right of digital personhood, a move that would require support from social media companies.

“An individual should be able to exercise autonomy and agency over their likeness in the digital ecosystem without needing the trigger of privacy or reputational harm or financial damage. In an ideal world, anyone should be able to get images taken down that they don’t consent to,” says Farish. “This is a gendered issue and speaks to the wider problem of exploiting the images of vulnerable people, whether they’re women or children, and people thinking that they can do whatever they want and get away with it. The right of digital personhood would need to be balanced against journalism and other freedom of speech considerations, but it could be a small paradigm shift,” she says.

