DeepFakes and False Lights: what does the law say?

What do Scarlett Johansson, cyber intelligence experts and some lawmakers have in common? A shared concern about AI-generated videos. Known as “DeepFakes,” these videos can have a damaging impact on reputations, emotional health, and even national security. But what is the legal status of this disruptive – and oftentimes disturbing – technology?

Deepfake – which combines “deep learning” and “fake” – is commonly defined as an artificial intelligence-based human image synthesis technique. Put simply, it’s a way to superimpose one face over another.

In December 2017, an anonymous Reddit user started a viral phenomenon by using machine-learning software to swap porn performers’ faces with those of famous actresses. Scarlett Johansson, one of the most highly-paid actresses in Hollywood, has herself been the victim of such “creations”. Speaking to the Washington Post, she explained that “nothing can stop someone from cutting and pasting my image or anyone else’s onto a different body and making it look as eerily realistic as desired. There are basically no rules on the internet because it is an abyss that remains virtually lawless.”

It goes without saying that such fake porn videos can easily damage careers, emotional well-being, and a person’s sense of dignity and self-esteem. But there are other implications, too.

As a general starting point, it’s useful to have an understanding of what AI is – and isn’t. “Artificial Intelligence” is not another word for the robot overlords in Blade Runner or even Skynet’s Terminators. Rather, AI is fundamentally a machine-learning application whereby a computer learns to fulfil a certain task on its own. What makes AI special is that machines are essentially “taught” to complete tasks previously done by humans, by repeating the task over and over again.

With deepfakes, it doesn’t take long for the AI to learn the skill with eerie precision, and produce sophisticated (albeit artificial) images. The technology has many legitimate uses, especially in the film industry, where an actor’s face can be placed on their stunt double’s body. But thanks to continued advancement in the technology itself, the political and legal risks are higher than ever before.

On 29 January, US Director of National Intelligence Dan Coats spoke before the Senate Select Committee on Intelligence to deliver the Worldwide Threat Assessment, which had been compiled by the US intelligence community. The document sets out the biggest global threats in the following order: cyber; online influence operations (including election interference); weapons of mass destruction; terrorism; counterintelligence; and emerging and disruptive technologies.

Yes, cyber attacks and online influence operations are discussed before traditional weapons of mass destruction. The report even mentions deepfakes explicitly:

Adversaries and strategic competitors probably will attempt to use deep fakes or similar machine-learning technologies to create convincing—but false—image, audio, and video files to augment influence campaigns directed against the United States and our allies and partners.

Senator Mark Warner, the top Democrat on the Senate Intelligence Committee, explained that “we already struggle to track and combat interference efforts and other malign activities on social media — and the explosion of deep fake videos is going to make that even harder.” This is particularly relevant given the severe political polarization around the world today: from Brexit to Trump and everywhere in between, deepfakes could become powerful ways to spread more disinformation and distrust.

There are some legal remedies which may combat some of the more nefarious aspects of the deepfake. As explained by the International Association of Privacy Professionals (IAPP), in common law jurisdictions like the United States and the United Kingdom, the victim of a deepfake creation may be able to sue the deepfake’s creator under one of the privacy torts. By way of example, the false light tort requires a claimant to prove that the deepfake in question incorrectly represents the claimant, in a way that would be embarrassing or offensive to the average person.

Another potentially relevant privacy tort is that of misappropriation or the right of publicity, if the deepfake is used for commercial purposes. Consider, for example, if someone made a deepfake commercial of Meghan, the Duchess of Sussex, endorsing a certain makeup brand. Since individuals generally do not own the copyright interest in their own image (i.e., the photograph or video used to make a deepfake), copyright law is not a good remedy to rely upon. Instead, Meghan could argue that the deepfake misappropriated her personality and reputation for someone else’s unauthorised commercial advantage. However, it’s important to note that personality rights are frustratingly nebulous here in the United Kingdom, as I explained in Fame and Fortune: how celebrities can protect their image.

Depending on the nature of the deepfake, a victim may also be able to sue for the intentional infliction of emotional distress, cyberbullying, or even sexual harassment. But in many instances, the burden of proof for these claims can be notoriously difficult to meet.

Furthermore, the practical challenges of suing the creator of a deepfake are considerable. Firstly, such creators are often anonymous or located in another jurisdiction, which makes legal enforcement very difficult. Although a victim could request that the creator’s internet service provider (ISP) remove the deepfake, establishing what is known as “online intermediary liability” and forcing an ISP to get involved can be an uphill battle in and of itself (this was the topic of one of my papers in law school). As for the victim exercising their right to be forgotten under the EU’s General Data Protection Regulation (Article 17, GDPR), the same problem arises: who is responsible for taking down the deepfake?

Secondly, the creator may lodge a defence of free speech or creative expression, especially if the deepfake victim is a political figure or otherwise in the public spotlight. This raises the question: to what extent is a deepfake depicting a member of parliament any different from a satirical cartoon or parody? Unless the deepfake is outrageously obscene or incites actual criminal behaviour, it may be nearly impossible to take legal action.

Deepfakes are but one of many instances where the law has not quite kept up with the rapid development of new technology. Although issues like these keep technology lawyers like myself employed, the potential for genuine harm caused by deepfakes in the wrong hands cannot be overstated. It should be fairly clear that outlawing or attempting to ban deepfakes is neither possible nor desirable, but increased regulation may be a viable option. Deepfakes could be watermarked or labelled before being shared by licensed or regulated entities (for example, news organisations), much in the same way that airbrushed models in advertisements are labelled in France. Doing so may at least slow the proliferation of deepfakes purporting to be genuine.

But until then, the only advice remains that you shouldn’t believe everything you read – or see, or hear – online.

Update 14 June 2019: The European Commission has released a joint Report on the implementation of the Action Plan Against Disinformation.

France vs Russia in media regulator showdown

France’s broadcasting regulator recently issued a warning to the French division of Russian television channel RT for falsifying facts in a programme about the use of chemical weapons in Syria. The following day, the Russian state media regulator accused French television channel France 24 of violating Russian media laws. As relations between western countries and Moscow deteriorate, France nears passing “Fake News” regulation to hit back at RT, while France 24 risks having its operating licences revoked in Russia.

RT France’s broadcast on Syria

At least 40 people died earlier this year from exposure to chlorine and sarin gas in the Syrian town of Douma. The attack provoked global outrage and Western governments blamed the attack on Syrian President Bashar al-Assad, a Russian ally. Within days, the United States, Britain, and France led retaliatory missile strikes against Assad’s suspected chemical weapons sites.

Several days later, RT France aired a segment entitled “Simulated Attacks” during its evening news programme, which dismissed the chemical weapons attacks as staged. Furthermore, RT France dubbed over the voices of Syrian civilians with words they had not said. Portraying the Syrian attack in such a manner may be a violation of RT France’s contractual and regulatory obligations under French law.

Xenia Fedorova, President of RT France, reportedly has a direct line to President Putin

A Muscovite in Paris

Bolstered by the popularity of its French language website and YouTube channel, RT took the decision to open a Paris bureau after the Élysée Palace refused to provide RT reporters with press credentials to cover presidential news conferences. Previously, the state-backed broadcaster had been criticised by French President Emmanuel Macron for “behaving like deceitful propaganda” and producing “infamous counter-truths about him.” As a presidential candidate, Macron was targeted by a campaign of fake news and hacking attempts from Russia, and he is reported to have taken the affront personally.

I have decided that we are going to evolve our legal system to protect our democracy from fake news. The freedom of the press is not a special freedom, it is the highest expression of freedom. If we want to protect liberal democracies, we have to be strong and have clear rules.

Emmanuel Macron

Nevertheless, when speaking about the channel prior to its launch, RT France’s president Xenia Fedorova commented: “France is a country with a storied legacy of respect for the freedom of expression and embrace of new ideas. RT France will enable the audiences to explore this diversity and hear the voices rarely found in the mainstream media.”

The Conseil supérieur de l’audiovisuel (Superior Audiovisual Council, or CSA) has authority under the French Freedom of Communication Act or “Léotard Act” (loi n° 86-1067) to regulate television programming in France. RT only recently entered the French market in January 2018, and like all broadcasters in the country, operates under a contract with the CSA. In its official notice, the CSA stated that the Russian outlet violated its obligations under the contract, namely:

  • Article 2-3-1: journalists, presenters, hosts or programme directors will ensure that they observe an honest presentation of questions relating to controversies and disputed issues
  • Article 2-3-6: the publisher will demonstrate precision in the presentation and treatment of news. It will ensure the balance between the context in which images were taken and the subject that they show, [and] cannot distort the initial meaning of the images or words collected, nor mislead the viewer

The CSA went on to claim that RT France displayed “failures of honesty, rigor of information, and diversity of points of view.” Furthermore, “there was a marked imbalance in the analysis, which, on a topic as sensitive as this, did not lay out the different points of view.”

Although RT France acknowledged a mistake had been made in the French translation of comments from a Syrian witness, it claimed that this was a “purely technical error” which had been corrected. Rebutting CSA’s complaint, Xenia Fedorova stated, “RT France covers all subjects, including the Syrian conflict, in a totally balanced manner, by giving all sides a chance to comment.”

 

Not amused: standing alongside Putin, Macron stated at this 2017 conference that “Russia Today did not behave as media organisations and journalists, but as agencies of influence and propaganda, lying propaganda – no more, no less.”

 

A Parisien in Moscow

France 24 broadcasts in English on Russian satellite packages, and has about 1,348,000 weekly viewers. In a statement, Russia’s Federal Service for Supervision of Communications, Information Technology and Mass Media, commonly known as Roskomnadzor, identified a violation of media law by France 24 in Russia.

A Russian media source reports that “during an analysis of the licensing agreements in watchdog Roskomnadzor’s possession, it has been established that the editorial activity of France 24 is under the control of a foreign legal entity.”

This would violate Article 19.1 of the Russian Mass Media Law, which was amended in 2016 to restrict foreign ownership of media companies. The law bans foreigners from holding more than a 20 per cent stake in Russian media outlets, effectively forcing them to be controlled by Russian legal entities.

RT’s chief editor Margarita Simonyan said the Roskomnadzor move was a retaliatory action for the CSA’s warning. Speaking to state news agency RIA Novosti, Simonyan explained, “Russia is a big country. Unlike many, we can afford ourselves the luxury of tit-for-tat measures.”

RT is widely acknowledged as the Russian government’s main weapon in an intensifying information war with the West. In respect of media ownership, it is no secret that the Kremlin uses direct ownership to influence publications and the airwaves. Every major Russian TV channel is fully or partially owned by the state except one, NTV. Even so, NTV is owned by Gazprom, the natural gas giant in which the government holds a controlling stake.

Because of the constrained political environment, Russian media are unable to resist pressure from the state and have succumbed to the well-known pattern of propaganda and conformism under which they operated in Soviet times. The period of relative press freedom, which ended with Vladimir Putin’s ascension to power, was too short for the Russian media to become a strong democratic institution.

Index on Censorship

In the wake of alleged Russian interference with American elections and the Brexit referendum, lawmakers now face the challenge of regulating a defiant new type of expression. Is this propaganda masquerading as journalism, which should be curtailed or even censored? Or is RT simply a voice from a different perspective? Should viewers be trusted to make the best decision as the information wars carry on?

In France at least, the road to regulation seems to be preferred. After fierce debate, the French Parliament approved draft legislation allowing courts to determine whether news reports published in the three months before an election are credible, or should be taken down.