DeepFakes and False Lights: what does the law say?

What do Scarlett Johansson, cyber intelligence experts and some lawmakers have in common? A shared concern about AI-generated videos. Known as “deepfakes,” these videos can have a damaging impact on reputations, emotional health, and even national security. But what is the legal status of this disruptive – and oftentimes disturbing – technology?

Deepfake – which combines “deep learning” and “fake” – is commonly defined as an artificial intelligence-based human image synthesis technique. Put simply, it’s a way to superimpose one face over another.

In December 2017, an anonymous Reddit user started a viral phenomenon by using machine-learning software to swap porn performers’ faces with those of famous actresses. Scarlett Johansson, one of the most highly paid actresses in Hollywood, has herself been the victim of such “creations”. Speaking to the Washington Post, she explained that “nothing can stop someone from cutting and pasting my image or anyone else’s onto a different body and making it look as eerily realistic as desired. There are basically no rules on the internet because it is an abyss that remains virtually lawless.”

It goes without saying that such fake porn videos can easily damage careers, emotional well-being, and a person’s sense of dignity and self-esteem. But there are other implications, too.

As a general starting point, it’s useful to have an understanding of what AI is – and isn’t. “Artificial Intelligence” is not another word for the robot overlords in Blade Runner or even Skynet’s Terminators. Rather, AI is fundamentally a machine-learning application whereby a computer learns to fulfil a certain task on its own. What makes AI special is that machines are essentially “taught” to complete tasks previously done by humans, by performing those tasks over and over again.
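To make the “taught by repetition” point concrete, here is a minimal, illustrative sketch of supervised machine learning in Python using the scikit-learn library. It has nothing to do with face-swapping specifically (real deepfake tools use far larger deep neural networks trained on thousands of facial images); it simply shows a model getting better at a task by repeatedly processing labelled examples.

```python
# A minimal sketch of supervised machine learning: a model is "taught" a
# task (recognising handwritten digits) by repeatedly processing labelled
# examples. Illustrative only -- deepfake tools use far larger deep
# neural networks trained on facial images.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()  # ~1,800 small greyscale images of the digits 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

# The network adjusts its internal weights over many passes ("epochs")
# through the training data -- doing the task over and over again.
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)

print(f"Accuracy on unseen images: {model.score(X_test, y_test):.2%}")
```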

With deepfakes, it doesn’t take long for the AI to learn the skill with eerie precision, and produce sophisticated (albeit artificial) images. The technology has many legitimate uses, especially in the film industry, where an actor’s face can be placed on their stunt double’s body. But thanks to continued advancement in the technology itself, the political and legal risks are higher than ever before.

On 29 January, US Director of National Intelligence Dan Coats spoke before the Senate Select Committee on Intelligence to deliver the Worldwide Threat Assessment, which had been compiled by the US intelligence community. The document sets out the biggest global threats in the following order: cyber, online influence operations (including election interference), weapons of mass destruction, terrorism, counterintelligence, and emerging and disruptive technologies.

Yes, cyber attacks and online influence operations are discussed before traditional weapons of mass destruction. The report even mentions deepfakes explicitly:

Adversaries and strategic competitors probably will attempt to use deep fakes or similar machine-learning technologies to create convincing—but false—image, audio, and video files to augment influence campaigns directed against the United States and our allies and partners.

Senator Mark Warner, the top Democrat on the Senate Intelligence Committee, explained that “we already struggle to track and combat interference efforts and other malign activities on social media — and the explosion of deep fake videos is going to make that even harder.” This is particularly relevant given the severe political polarisation around the world today: from Brexit to Trump and everywhere in between, deepfakes could become a powerful vehicle for spreading disinformation and sowing distrust.

There are some legal remedies which may combat some of the more nefarious aspects of the deepfake. As explained by the International Association of Privacy Professionals (IAPP), in common law jurisdictions like the United States and the United Kingdom, the victim of a deepfake creation may be able to sue the deepfake’s creator under one of the privacy torts. By way of example, the false light tort requires a claimant to prove that the deepfake in question incorrectly represents the claimant, in a way that would be embarrassing or offensive to the average person.

Another potentially relevant privacy tort is that of misappropriation or the right of publicity, if the deepfake is used for commercial purposes. Consider, for example, if someone made a deepfake commercial of Meghan, Duchess of Sussex, endorsing a certain makeup brand. Since individuals generally do not own the copyright interest in their own image (i.e., the photograph or video used to make a deepfake), copyright law is not a good remedy to rely upon. Instead, Meghan could argue that the deepfake misappropriated her personality and reputation for someone else’s unauthorised commercial advantage. However, it’s important to note that personality rights are frustratingly nebulous here in the United Kingdom, as I explained in Fame and Fortune: how celebrities can protect their image.

Depending on the nature of the deepfake, a victim may also be able to sue for intentional infliction of emotional distress, cyberbullying, or even sexual harassment. But in many instances, the burden of proof for these claims is notoriously difficult to meet.

Furthermore, the practical challenges of suing the creator of a deepfake are considerable. Firstly, such creators are often anonymous or located in another jurisdiction, which makes legal enforcement very difficult. Although a victim could ask the creator’s internet service provider (ISP) to remove the deepfake, establishing what is known as “online intermediary liability” and forcing an ISP to get involved can be an uphill battle in and of itself (this was the topic of one of my papers in law school). As for the victim exercising their right to be forgotten under the EU’s General Data Protection Regulation (Article 17, GDPR), the same problem arises: who is responsible for taking down the deepfake?

Secondly, the creator may raise a defence of free speech or creative expression, especially if the deepfake victim is a political figure or otherwise in the public spotlight. This raises the question: to what extent is a deepfake depicting a member of parliament any different from a satirical cartoon or parody? Unless the deepfake is outrageously obscene or incites actual criminal behaviour, it may be nearly impossible to take legal action.

Deepfakes are but one of many instances where the law has not quite kept up with the rapid development of new technology. Although issues like these keep technology lawyers like myself employed, the potential for genuine harm caused by deepfakes in the wrong hands cannot be overstated. It should be fairly clear that outlawing or attempting to ban deepfakes is neither possible nor desirable, but increased regulation may be a viable option. Deepfakes could be watermarked or labelled before being shared by licensed or regulated entities (for example, news organisations), much in the same way that airbrushed models in advertisements are labelled in France. Doing so may at least slow the proliferation of deepfakes purporting to be genuine.
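To make the labelling idea concrete, here is a minimal sketch using the Pillow imaging library to stamp a visible “synthetic media” label onto an image before it is shared. The file names and label text are placeholders of my own; any real scheme would likely pair a visible mark with tamper-resistant metadata or a cryptographic watermark.

```python
# A sketch of visibly labelling an image as synthetic before it is shared.
# File names and label text are placeholders; a production scheme would
# also embed tamper-resistant metadata, not just a visible stamp.
from PIL import Image, ImageDraw

def label_as_synthetic(input_path: str, output_path: str,
                       label: str = "SYNTHETIC MEDIA") -> None:
    img = Image.open(input_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    # Place the label in the bottom-left corner on a contrasting box.
    left, top, right, bottom = draw.textbbox((0, 0), label)
    w, h = right - left, bottom - top
    x, y = 10, img.height - h - 10
    draw.rectangle([x - 4, y - 4, x + w + 4, y + h + 4], fill="black")
    draw.text((x, y), label, fill="white")
    img.save(output_path)

label_as_synthetic("deepfake.jpg", "deepfake_labelled.jpg")
```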

But until then, the only advice remains that you shouldn’t believe everything you read – or see, or hear – online.

Update 14 June 2019: The European Commission has released a joint Report on the implementation of the Action Plan Against Disinformation – available here.

Airbrushing history? Photos of Oxford Student Celebrations Raise Questions About Privacy Rights and Journalism

A former Oxford University student asked image agency Alamy to remove photographs of her celebrating the end of exams. Now, the photographer accuses Alamy of “censoring the news”. Is this a threat to freedom of the press, or has the woman’s right to privacy been properly protected?

The end of exams is a liberating and happy time for university students around the world. At Oxford, students take their celebrations to another level by partying en masse in the streets, covering each other in champagne, shaving foam, confetti, flour and silly string in a tradition known as “Trashing.”

An Alamy photo of Oxford celebrations from 1968. “Trashing” has become a bit crazier since the 1990s.

Speaking to the Press Gazette, photographer Greg Blatchford explained that during the 2014 Trashing, a student invited him to take photographs of her celebrating on the public streets. Some of the images show her swigging from a bottle of champagne, while in others she is covered in silly string.

Blatchford then sent “about 20” images to Alamy as news content. In email correspondence with Blatchford, the former student said that she “loved” the images, and even shared them on Facebook. This summer, four years later, the woman contacted Alamy to have the photos deleted. The company removed the images – much to Blatchford’s dismay.

An Alamy stock image of Oxford University Trashing celebrations. Note: THIS IS NOT ONE OF THE SUBJECT PHOTOGRAPHS.

The right to be forgotten under the GDPR

Because the woman can be identified from the photographs, they constitute “personal data” as defined by Article 4 of the General Data Protection Regulation (GDPR). Under Article 17 GDPR, data subjects have the right, in certain circumstances, to compel the erasure of personal data concerning them.

For example, if the data was originally collected or used because the individual gave their consent, and that consent is subsequently withdrawn, the company may have to honour the request for deletion (Article 17(1)(b)). However, a company can resist the request if an exception applies. Importantly for news and media agencies, if keeping the data is necessary for exercising the right of freedom of expression and information, they may be able to refuse the request and retain the data (Article 17(3)(a)).
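To illustrate the shape of that decision, here is a deliberately simplified sketch of the Article 17 balancing exercise. The field names and the handful of grounds below are my own shorthand for Articles 17(1) and 17(3); real erasure requests turn on case-by-case legal judgment, not a lookup table.

```python
# A deliberately simplified sketch of the Article 17 GDPR decision flow.
# The grounds below paraphrase Articles 17(1) and 17(3); real erasure
# requests require case-by-case legal assessment, not a lookup table.
from dataclasses import dataclass

@dataclass
class ErasureRequest:
    consent_withdrawn: bool           # Art. 17(1)(b) ground
    data_no_longer_necessary: bool    # Art. 17(1)(a) ground
    needed_for_free_expression: bool  # Art. 17(3)(a) exception
    needed_for_legal_claims: bool     # Art. 17(3)(e) exception

def must_erase(req: ErasureRequest) -> bool:
    ground = req.consent_withdrawn or req.data_no_longer_necessary
    exception = req.needed_for_free_expression or req.needed_for_legal_claims
    return ground and not exception

# e.g. consent withdrawn, but the image is genuinely newsworthy:
print(must_erase(ErasureRequest(True, False, True, False)))  # False
```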

For more details on how the right to be forgotten works in practice, see my earlier post, Now You’re Just Somebody That I Used to Know.

Are journalists under threat from privacy lawyers?

Blatchford maintained that although the photos are now considered “stock images,” they were originally “news” photos and should not have been removed. By deleting them, Alamy “are censoring the news. I’m incensed that someone can influence news journalism and censor the past where clearly if photographs are taken in public, with the full consent of participants they can turn around and say ‘sorry, that’s not news’ later. This sets a precedent for anybody to walk up to a news organisation and say I don’t like the pictures of me. Journalists will then start feeling the threat of lawyers.”

In a statement to the Press Gazette, Alamy’s director of community Alan Capel said the images were submitted as news four years ago, but moved 48 hours later to the stock collection. “Therefore we are surprised that this is deemed to be ‘censoring the news.’ As per our contract with our contributors, we can remove any images from our collection if we see a valid reason to do so.”

The university said that participating in Trashing can lead to fines and disciplinary action, since it is against the university’s code of conduct. The comical images of students wearing sub fusc (formal academic attire) while partying are often published in newspapers around the country in May.

Privacy and press freedom have long been considered competing interests, but that’s not to say that striking an appropriate balance between the two is impossible.

On some level, I do sympathise with the photographer. I also struggle to buy Alamy’s argument that the images are not “news content” and are now “stock images.” The classification of an image should be based on its context, purpose and subject matter – not the time that has elapsed since the event, nor the label attributed to it on a website.

Stock images are, by definition, professional photographs of common places, landmarks, nature, events or people. By contrast, the Oxford Trashing photos are attributed to a specific time (May), place (Oxford), category of people (students), and event (celebrating the end of exams). They are popular for several reasons. Firstly, they illustrate a charming and comical juxtaposition: although these students attend one of the oldest and most prestigious universities in the world, they are – after all – entitled to a bit of fun. Secondly, Trashing has received increased press attention in recent years, as students have become subject to complaints, fines, disciplinary action, and even police enforcement. These images clearly show, in ways that words alone cannot, matters of public interest.


In this particular instance, however, I think Alamy made the right decision in deleting the images.

Although the Press Gazette does not name the woman, it does note she is “a marketing director in New York.” It’s entirely plausible that she has valid concerns that the images of her participating in Trashing may negatively impact her reputation and career, or otherwise cause some sort of harm or embarrassment.

She claims that “there was no consent given to publish or sell my photos anywhere. I am not a model nor have given permission to any photographers to take photos of me to publicly display or to sell. This was a complete breach of privacy.” This contradicts what the email records show, but even if she had lawfully consented to the photographs being taken at the time, she is entirely within her rights to now withdraw consent. 

On balance, Alamy probably has dozens – if not hundreds – of images from the 2014 Trashing at Oxford. The likelihood that the images of this woman in particular are somehow especially newsworthy is minimal. Had Alamy refused to delete the photos, the woman would have been entitled to raise a complaint with the Information Commissioner’s Office. ICO enforcement action can include injunctions, sanctions, or monetary fines. Furthermore, Alamy would risk becoming known as an organisation that doesn’t care about privacy laws, thereby damaging its reputation.

Contrary to Blatchford’s concerns, it is doubtful that an organisation would delete a genuinely newsworthy image, simply because someone doesn’t like how they look. The right to be forgotten is not an absolute right to be purged from history, but a right to regain control of how information about you appears online.

For more details on how the right to be forgotten works in practice, see my earlier post, Now You’re Just Somebody That I Used to Know. If you’re interested in how celebrities control images of themselves, see Fame and Fortune: How do Celebrities Protect Their Image?

Header image by Alex Krook via Flickr

Fame and fortune: how do celebrities protect their image?

Famous movie stars and athletes earn big bucks beyond their day job at the studio or stadium. Their image can be used in a variety of commercial contexts, ranging from endorsements and sponsorships to merchandising and deals with fashion brands and magazines. MarketWatch reports that, on average, signing a celebrity correlates with a rise in share prices and a 4% increase in sales. After Chanel signed Nicole Kidman in 2003 to promote their N°5 perfume, global sales of the fragrance increased by 30%.

Celebrities today spend a huge amount of time and energy developing and maintaining their public image. But here in the United Kingdom, “image rights” have never been clearly stated in law. So how do celebrities protect and control the publicity associated with their name, image, and brand?

Continue reading “Fame and fortune: how do celebrities protect their image?”

Reputation: Taylor Swift’s protections under American and English defamation law

This post is featured on the University of the Arts London’s intellectual property blog, creativeIP.org.

♫♬ Now we’ve got problems / and I don’t think we can solve them (without lawyers…)

The right to freedom of expression is not an absolute right: there are certain restrictions in place to protect an individual’s reputation. But those restrictions vary significantly, depending on which side of the Atlantic you’re on. Considering the shared legal traditions of the United States and Great Britain, their differences on the issue of free speech are surprising.

In early September, PopFront published an article entitled “Swiftly to the alt-right: Taylor subtly gets the lower case kkk in formation.” Exploring the singer’s (somewhat convoluted, if not contrived) connections to the American alt-right, PopFront suggests Swift’s song “Look What You Made Me Do” resonates with Breitbart readers, Trump supporters, and white supremacists, among others. The article also shows a screenshot from Swift’s music video juxtaposed with a photo of Hitler, noting that “Taylor lords over an army of models from a podium, akin to what Hitler had in Nazi Germany.”

Continue reading “Reputation: Taylor Swift’s protections under American and English defamation law”