DeepFakes and False Lights: what does the law say?

What do Scarlett Johansson, cyber intelligence experts and some lawmakers have in common? A shared concern about AI-generated videos. Known as “DeepFakes,” these videos can have a damaging impact on reputations, emotional health, and even national security. But what is the legal status of this disruptive – and oftentimes disturbing – technology?

Deepfake – which combines “deep learning” and “fake” – is commonly defined as an artificial intelligence-based human image synthesis technique. Put simply, it’s a way to superimpose one face over another.

In December 2017, an anonymous Reddit user started a viral phenomenon by using machine-learning software to swap porn performers’ faces with those of famous actresses. Scarlett Johansson, one of the most highly-paid actresses in Hollywood, has herself been the victim of such “creations”. Speaking to the Washington Post, she explained that “nothing can stop someone from cutting and pasting my image or anyone else’s onto a different body and making it look as eerily realistic as desired. There are basically no rules on the internet because it is an abyss that remains virtually lawless.”

It goes without saying that such fake porn videos can easily damage careers, emotional well-being, and a person’s sense of dignity and self-esteem. But there are other implications, too.

As a general starting point, it’s useful to have an understanding of what AI is – and isn’t. “Artificial Intelligence” is not another word for the robot overlords in Blade Runner or even Skynet’s Terminators. Rather, AI is fundamentally a machine-learning application whereby a computer learns to fulfil a certain task on its own. What makes AI special is that machines are essentially “taught” to complete tasks previously done by humans, by doing the task over and over again.

With deepfakes, it doesn’t take long for the AI to learn the skill with eerie precision, and produce sophisticated (albeit artificial) images. The technology has many legitimate uses, especially in the film industry, where an actor’s face can be placed on their stunt double’s body. But thanks to continued advancement in the technology itself, the political and legal risks are higher than ever before.

On 29 January, US Director of National Intelligence Dan Coats spoke before the Senate Select Committee on Intelligence to deliver the Worldwide Threat Assessment, which had been compiled by the US intelligence community. The document sets out the biggest global threats in the following order: cyber, online influence operations (including election interference), weapons of mass destruction, terrorism, counterintelligence, and emerging and disruptive technologies.

Yes, cyber attacks and online influence operations are discussed before traditional weapons of mass destruction. The report even mentions deepfakes explicitly:

Adversaries and strategic competitors probably will attempt to use deep fakes or similar machine-learning technologies to create convincing—but false—image, audio, and video files to augment influence campaigns directed against the United States and our allies and partners.

Senator Mark Warner, the top Democrat on the Senate Intelligence Committee, explained that “we already struggle to track and combat interference efforts and other malign activities on social media — and the explosion of deep fake videos is going to make that even harder.” This is particularly relevant given the severe political polarization around the world today: from Brexit to Trump and everywhere in between, deepfakes could become powerful ways to spread more disinformation and distrust.

There are some legal remedies which may combat some of the more nefarious aspects of the deepfake. As explained by the International Association of Privacy Professionals (IAPP), in common law jurisdictions like the United States and the United Kingdom, the victim of a deepfake creation may be able to sue the deepfake’s creator under one of the privacy torts. By way of example, the false light tort requires a claimant to prove that the deepfake in question incorrectly represents the claimant, in a way that would be embarrassing or offensive to the average person.

Another potentially relevant privacy tort is that of misappropriation, or the right of publicity, if the deepfake is used for commercial purposes. Consider, for example, if someone made a deepfake commercial of Meghan, the Duchess of Sussex, endorsing a certain makeup brand. Since individuals generally do not own the copyright interest in their own image (i.e., the photograph or video used to make a deepfake), copyright law is not a good remedy to rely upon. Instead, Meghan could argue that the deepfake misappropriated her personality and reputation for someone else’s unauthorised commercial advantage. However, it’s important to note that personality rights are frustratingly nebulous here in the United Kingdom, as I explained in Fame and Fortune: how celebrities can protect their image.

Depending on the nature of the deepfake, a victim may also be able to sue for the intentional infliction of emotional distress, cyberbullying, or even sexual harassment. But in many instances, the burden of proof to establish these claims can be a notoriously difficult standard to meet.

Furthermore, the practical challenges of suing the creator of a deepfake are considerable. Firstly, such creators are often anonymous or located in another jurisdiction, which makes legal enforcement very difficult. Although a victim could request that the creator’s internet service provider (ISP) remove the deepfake, establishing what is known as “online intermediary liability” and forcing an ISP to get involved can be an uphill battle in and of itself (this was the topic of one of my papers in law school). As for the victim exercising their right to be forgotten under the EU’s General Data Protection Regulation (Article 17, GDPR), the same problem arises: who is responsible for taking down the deepfake?

Secondly, the creator may lodge a defence of free speech or creative expression, especially if the deepfake victim is a political figure or otherwise in the public spotlight. This raises the question: to what extent is a deepfake depicting a member of parliament any different from a satirical cartoon or parody? Unless the deepfake is outrageously obscene or incites actual criminal behaviour, it may be nearly impossible to take legal action.

Deepfakes are but one of many instances where the law has not quite kept up with the rapid development of new technology. Although issues like these keep technology lawyers like myself employed, the potential for genuine harm caused by deepfakes in the wrong hands cannot be overstated. It should be fairly clear that outlawing or attempting to ban deepfakes is neither possible nor desirable, but perhaps increased regulation is a viable option. Deepfakes could be watermarked or labelled before being shared by licensed or regulated entities (for example, news organisations) much in the same way that airbrushed models in advertisements are labelled in France. Doing so may at least slow down the proliferation of deepfakes purporting to be genuine.

But until then, the only advice remains that you shouldn’t believe everything you read – or see, or hear – online.

Update 14 June 2019: The European Commission has released a joint Report on the implementation of the Action Plan Against Disinformation.

UK regulator to investigate social media influencers

A number of celebrities and social media stars are being investigated by the Competition and Markets Authority, which says it has concerns that some influencers are failing to disclose that they are being paid for their endorsements.

In the early days of social media, Instagram and Facebook were seen as ways to connect with those closest to us, and to provide an insight into our private lives. Today however, models and celebrities can make thousands (if not hundreds of thousands) of dollars with every photo they post, simply by featuring a product in their image. This nuanced form of targeted marketing deliberately blurs the line between “advertising” and “personal” sharing, and it’s big business. According to the Financial Times, Instagram influencers earned more than $1bn (£770m) in 2017.

Pictured here is Chiara Ferragni: Italian fashion writer, influencer, businesswoman, and the first-ever blogger to be the focus of a Harvard Business School case study. Is this post of hers an advertisement, or is she just sharing the love?

Under American law, companies who work with influencers (defined as “key individuals with significant social media followings”) to promote products, services, or brands must follow certain rules, many of which are set out in Title XVI (Commercial Practices) of the Code of Federal Regulations. In particular, when there exists a connection between the endorser and the seller of the advertised product that might materially affect the weight or credibility of the endorsement, such connection must be fully disclosed. (16 C.F.R. §§ 255.0-255.5).

In practice, this means that when a company pays an individual – either in cash, or through discounts, free travel, or products – the company and influencer should enter a written contract. The contract should oblige the influencer to both “disclose its material connection to the advertiser clearly and conspicuously,” as well as “refrain from making any false or misleading statements about the products and services.”

A nearly identical post to Chiara’s above, but Victoria at inthefrow has included #ad. Is that clear and conspicuous enough?

Here in the United Kingdom, where influencers are paid to promote, review or talk about a product on social media, the law requires that this must be made clear. The use of editorial content that promotes a product – also known as “advertorials” or “native advertising” – must clearly identify that the company has paid for the promotion.

Earlier this month, the Competition and Markets Authority (CMA) launched an investigation into whether consumers are being misled by celebrities who do not make clear that they have been paid, or otherwise rewarded, to endorse products online. In its press release, the CMA announced that it has already written to a range of celebrities and social media influencers to request information about their posts and the nature of the agreements they have in place with brands. This comes just weeks after Made in Chelsea star Louise Thompson was slapped on the wrist for failing to disclose an Instagram post as a paid-for advertisement for watchmaker Daniel Wellington.

The regulator is also asking consumers to share their experiences, and says it would “particularly benefit from hearing from people who have bought products which were endorsed on social media.”

Notice that this post says at the top, “paid partnership with.” Is that better than #ad?

The investigation is being carried out under Part 8 of the Enterprise Act 2002 in respect of potential breaches of the Consumer Protection from Unfair Trading Regulations 2008. If an influencer ignores the CMA’s requests to comply with the law, the CMA can seek an enforcement order in court. Breaching such an order can lead to an unlimited fine or a jail term of up to two years. However, examples of meaningful penalties are still almost non-existent.

What do you think? Are influencer adverts easy enough to spot, without the hashtags and caveats? Interestingly, a study by Bazaarvoice and Morar Research found that nearly half of the 4,000 UK consumers polled are “fatigued” by repetitive influencer content. The majority also said they felt influencers were publishing content that was “too materialistic” and “misrepresented real life.” Notwithstanding this, the World Federation of Advertisers reported that 65% of multinational brands plan to increase their influencer investment. Perhaps there’s truth in what Chiara herself once quipped: “some loved me, some hated me—but they all followed me.”


Interested in this topic? Be sure to check out The Fashion Law’s Annual Brand and Influencer Report: The Good, Bad, and Highly Problematic. Featured photo above is Lena Perminova at Paris Fashion Week Autumn/Winter 2018 | Source: Getty Images