Forging Authenticity: Experts’ workshop on Deepfake Technology, Risks and Governance

In September, I had the privilege of attending a two-day conference on deepfakes at the Swiss Re Centre for Global Dialogue near Zürich, Switzerland.

The conference was hosted by the International Risk Governance Center (IRGC), whose objective is to better understand emerging and systemic risks, as well as the governance of opportunities and risks associated with new technologies. Because the conference was subject to the Chatham House Rule and a paper from the event is forthcoming, I can’t go into too much detail. However, I thought it might be nice to set out in broad terms the topics of discussion and a few comments on what I found most interesting… as well as share some photos!


SUNDAY.

As the conference was scheduled for Monday and Tuesday, I flew out to Switzerland on the Sunday evening. In retrospect, this was a very good decision: the British Airways strike began the following day, and I was pleased not to have to worry about travel problems!

Dinner in the departure lounge! I had the chance to review some notes and watch the planes take off from London City Airport, over a nice glass of wine.
Transport in Zürich was super smooth. My plane landed at 22:47, and I was through security (with a stamp!) by 23:07. I was on the train by 23:15, which arrived at the Zürich Hauptbahnhof (Central Train Station) by 23:30. My hotel was just a few minutes’ walk from there!

MONDAY.

I woke up bright and early for the chance to have a morning walk through central Zürich, having never been to the city before. My hotel was right on the shore of the Zürichsee (Lake Zürich) and I had a really nice croissant for breakfast at Confiserie Sprüngli! I walked along Bahnhofstrasse, which is Zürich’s main downtown street and also one of the world’s most expensive and exclusive shopping avenues. I was also feeling somewhat nervous about the conference – in a good way, of course! – so stepping out into the fresh autumnal air was a nice way to mentally prepare for the day ahead.


SWISS RE CENTRE FOR GLOBAL DIALOGUE

For those of you who might not know, Swiss Re is the world’s second-largest reinsurance company. Their Centre for Global Dialogue is located just outside of Zürich, with breathtaking views of Lake Zürich and the Alps. From my room at the Centre (pictured below), I could even see the lake!

my press photo! 😊

Getting settled in at the conference!

SETTING THE SCENE: DEEPFAKE TECHNOLOGY

The conference itself began in the early afternoon with a panel discussion by experts from IBM Watson, the California-based tech company NVIDIA, and the Idiap Research Institute, which is affiliated with the École Polytechnique Fédérale de Lausanne (EPFL). In this technology session, we discussed:

  • The technologies that have enabled deepfake creation and distribution.
  • The plausible trajectory for these technologies, and what the deepfake ecosystem might look like in five years.
  • Promising technologies for countering deepfakes, and what research advances might help reduce risks.
  • Whether or not there are “information hazards” arguments for restricting access to research in this area, to prevent its use for malicious purposes.

Three things from this session really stuck in my mind. Firstly, the reminder that from a security standpoint, humans really are the biggest risk to any technological system. In particular, burnout poses a challenge, because we cannot stay hyper-vigilant at all times. Secondly, I found it interesting to note that detection is unlikely to be a winnable arms race. Watermarking and fingerprinting are good ideas in theory, but workable solutions would be difficult to create: if we required watermarks for certain media, would the absence of a watermark indicate that something is fake? Watermarks can be stripped or forged with relative ease (see the sketch below). Thirdly, the general consensus of the group was that the biggest risk posed by deepfakes is the degradation of standard notions of trust.
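To illustrate that logical gap, here is a minimal sketch of provenance-by-watermark verification (my own illustration in Python, with a hypothetical signing key and function names; a real scheme would embed the mark in the media itself and need far more elaborate key infrastructure). The point is that a verifier can only ever report “valid mark” or “no/invalid mark”: the absence of a mark proves nothing about authenticity.

```python
import hashlib
import hmac
from typing import Optional

# Hypothetical key; in any real scheme, key management is the hard part.
SIGNING_KEY = b"hypothetical-publisher-key"

def make_mark(media: bytes) -> bytes:
    """Mark authentic media at creation time (a stand-in for a real watermark)."""
    return hmac.new(SIGNING_KEY, media, hashlib.sha256).digest()

def verify(media: bytes, mark: Optional[bytes]) -> str:
    """Classify media by its mark -- and note what this cannot tell us."""
    if mark is None:
        # Ambiguous: the mark may have been stripped, never applied at all,
        # or the media may be fake. Absence proves nothing.
        return "unverified"
    if hmac.compare_digest(mark, make_mark(media)):
        # This only says "the key-holder vouched for these exact bytes",
        # not that the content depicts reality.
        return "marked-as-authentic"
    return "invalid-mark"

video = b"...raw media bytes..."
print(verify(video, make_mark(video)))  # marked-as-authentic
print(verify(video, None))              # unverified: fake, or merely unmarked?
```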

DEEPFAKE RISKS & VULNERABILITIES

The second session was led by representatives from Zurich Insurance, the French banking giant BNP Paribas, and the Swiss Federal Institute of Technology in Zurich. In this session covering various deepfake risks, we focused on the following points:

  • Are there reasons to worry more about deepfakes than about the other forms of deception and manipulation we’ve used throughout history?
  • Who or what is most at risk of harm: individuals, businesses, public institutions, or society at large?
  • What kinds of harm are of greatest concern? Harms could include fabricated evidence (such as insurance claims or judicial evidence), reputational damage, abuse/intimidation/extortion, manipulation of public opinion (including elections), and market manipulation.
  • Are there beneficial uses of deepfake technologies that should be shielded from regulatory interference?

The key point of this discussion concerned the tension between risk minimisation on the one hand, and the protection of certain liberties and economic freedoms on the other. It’s important to note that, traditionally, threats posed by technology have been used to force through pernicious changes in the law or expansions of government surveillance. Just think back to the post-9/11 USA PATRIOT Act, or even more recently to the UK’s Snoopers’ Charter. To minimise deepfake risks, we could certainly utilise certain forms of data monitoring, profiling and censorship, but at what cost?

DINNER AT THE VILLA

After an intense day of discussions and debate, we headed across the courtyard of the Centre to this beautiful Villa for drinks and dinner.

Dinner was held at this stunning Villa, located on the same grounds as the Centre.
gratuitous selfie
Drinks and discussions about deepfakes – what a great combination!

TUESDAY.

Tuesday morning’s sunrise view from my room!

LEGAL & REGULATORY RESPONSES

This session, which covered deepfakes from a legal and regulatory perspective, was probably my favourite. It was also special for me, because it was my first time ever moderating a panel discussion! We discussed which existing laws and regulations can be applied to problematic deepfakes: for example, those concerning fraud, privacy, defamation, stalking, and electoral law.

Legislatures in the United States as well as the United Kingdom have for several years now sought to address online sexual harassment, with numerous jurisdictions criminalising so-called “revenge porn”. Given that deepfakes first gained notoriety as manipulated pornographic videos, it seems only reasonable that some lawmakers have proposed specific bans on deepfakes showing obscene sexual activity. Furthermore, in September 2019 Texas became the first state to criminalise deepfake videos made with intent to injure a political candidate or influence an election.

But are these legal instruments sufficient to address deepfake risks, or are new laws needed? In addition to the above, we also discussed:

  • The potential impact of deepfakes on the legal/judicial system, for example in terms of admissibility of audio/video evidence.
  • Whether or not there is any need for – or prospect of – converging responses to deepfakes in different jurisdictions.

BREAK-OUT GROUP: CORPORATE & INSURANCE

After our coffee break on Tuesday, we divided up into smaller groups. I chose the Corporate and Insurance group – and I’m so glad I did, because I learned so much! Our main discussion focused on the potential financial risks to companies, investors, and markets more generally. Such risks range from fraud against customers to deepfakes designed to manipulate company stock prices or entire markets. From an insurance perspective, we discussed whether deepfake technologies create new challenges for the insurance industry, in terms of vulnerability to fraudulent claims.

You may be wondering why the insurance industry cares about manipulated videos. In essence, it comes back to the point above about truth and trust. Today, many insurance claims can be supported through online evidence submissions: take, for example, a photograph of your car after someone rear-ends it. If insurance fraud goes up through the use of deepfakes – despite detection software – that increased risk will be transferred to the insured, and premiums will rise (see the toy illustration below). Without a doubt, we are living in a data-driven world, with insurers gathering more and more data about the activities connected to the policies they write. There is an ever-growing amount of data available thanks to the Internet of Things (IoT), credit-checking websites, and public records: it’s easy to imagine the ways that deepfakes could undermine trust in all of it.
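As a toy illustration with made-up numbers (my own sketch, not figures from the workshop), the mechanism is simple arithmetic: the pure premium is expected claims spread across policyholders, so any fraud that inflates expected claims is ultimately priced into everyone’s premium.

```python
# Toy premium calculation with hypothetical numbers (my own illustration,
# not figures from the workshop). The "pure premium" is expected claims
# spread across all policyholders, so undetected deepfake-enabled fraud
# ends up priced into every premium.
policies = 100_000
expected_claims = 25_000_000   # expected honest claims, in CHF
fraud_uplift = 0.04            # assume deepfake fraud inflates claims by 4%

pure_premium = expected_claims / policies
loaded_premium = expected_claims * (1 + fraud_uplift) / policies

print(f"Pure premium:       CHF {pure_premium:,.2f}")    # CHF 250.00
print(f"With deepfake risk: CHF {loaded_premium:,.2f}")  # CHF 260.00
```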

After our break-out group sessions, we enjoyed a really (really!) nice gourmet buffet. Not pictured: me, chatting with really lovely and super smart people!

WHAT NEXT?

There’s only so much a group of lawyers, insurers, and computer scientists can cover in two days. In our final session, we discussed the questions that are likely to remain unanswered… at least, for now.

  • What are the potential societal implications of deepfakes, in terms of levels of trust, standards of truth, and electoral manipulation?
  • What is the value of trust in the digital age?
  • What role do “technologies of trust” have in response to the decline of older norms and patterns of social trust?
  • At an individual and societal level, can anything be done to reduce the viral sharing of false and harmful content?
  • What are the immediate priorities — what decisions could be taken now to improve incentives around content authenticity and integrity?
  • Are there wider lessons to be learned from the deepfake phenomenon about the governance of emerging technologies?

There were so many insightful and thought-provoking moments during this conference. In conclusion, I have to wonder whether we have taken “easy evidence” for granted. Was this technological evolution inevitable? Will the rise of the deepfake require us to place more faith in non-recorded forms of evidence, such as eyewitness testimony? Perhaps the special privilege we have afforded to video — to digital truth — is ending.


DeepFakes and False Lights: what does the law say?

What do Scarlett Johansson, cyber intelligence experts and some lawmakers have in common? A shared concern about AI-generated videos. Known as “DeepFakes,” these videos can have a damaging impact on reputations, emotional health, and even national security. But what is the legal status of this disruptive – and oftentimes disturbing – technology?

Deepfake – which combines “deep learning” and “fake” – is commonly defined as an artificial intelligence-based human image synthesis technique. Put simply, it’s a way to superimpose one face over another.

In December 2017, an anonymous Reddit user started a viral phenomenon by using machine-learning software to swap porn performers’ faces with those of famous actresses. Scarlett Johansson, one of the most highly-paid actresses in Hollywood, has herself been the victim of such “creations”. Speaking to the Washington Post, she explained that “nothing can stop someone from cutting and pasting my image or anyone else’s onto a different body and making it look as eerily realistic as desired. There are basically no rules on the internet because it is an abyss that remains virtually lawless.”

It goes without saying that such fake porn videos can easily damage careers, emotional well-being, and a person’s sense of dignity and self-esteem. But there are other implications, too.

As a general starting point, it’s useful to have an understanding of what AI is – and isn’t. “Artificial Intelligence” is not another word for the robot overlords in Blade Runner or even Skynet’s Terminators. Rather, AI fundamentally refers to machine-learning applications whereby a computer learns to fulfil a certain task on its own. What makes AI special is that machines are essentially “taught” to complete tasks that were previously done by humans, by doing the task over and over again.

With deepfakes, it doesn’t take long for the AI to learn the skill with eerie precision, and produce sophisticated (albeit artificial) images. The technology has many legitimate uses, especially in the film industry, where an actor’s face can be placed on their stunt double’s body. But thanks to continued advancement in the technology itself, the political and legal risks are higher than ever before.
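For the technically curious, here is a minimal sketch of the idea behind the early face-swap tools (in Python with PyTorch; the architecture, shapes and training details are simplified assumptions on my part, not any particular app’s implementation). One shared encoder learns general facial structure from both people’s faces, while a separate decoder is trained per identity; the “swap” is simply encoding person A’s face and decoding it with person B’s decoder.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Shared encoder: compresses any 64x64 face into a latent code."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Per-identity decoder: reconstructs one specific person's face."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(), # 32 -> 64
        )
    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per person

# Training (sketch): each decoder learns to reconstruct its own person's
# faces through the *shared* encoder, over and over, on thousands of frames.
faces_a = torch.rand(8, 3, 64, 64)  # stand-in for aligned face crops of person A
loss_a = nn.functional.mse_loss(decoder_a(encoder(faces_a)), faces_a)

# The "swap": encode person A's face, decode it as person B.
fake_b = decoder_b(encoder(faces_a))
```

Real tools layer face detection, alignment and blending on top of this, but the core trick is roughly this small, which is part of why the technology spread so quickly.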

On 29 January, US Director of National Intelligence Dan Coats spoke before the Senate Select Committee on Intelligence to deliver the Worldwide Threat Assessment, which had been compiled by the US intelligence community. The document sets out the biggest global threats in the following order: cyber; online influence operations (including election interference); weapons of mass destruction; terrorism; counterintelligence; and emerging and disruptive technologies.

Yes, cyber attacks and online influence operations are discussed before traditional weapons of mass destruction. The report even mentions deepfakes explicitly:

Adversaries and strategic competitors probably will attempt to use deep fakes or similar machine-learning technologies to create convincing—but false—image, audio, and video files to augment influence campaigns directed against the United States and our allies and partners.

Senator Mark Warner, the top Democrat on the Senate Intelligence Committee, explained that “we already struggle to track and combat interference efforts and other malign activities on social media — and the explosion of deep fake videos is going to make that even harder.” This is particularly relevant given the severe political polarization around the world today: from Brexit to Trump and everywhere in between, deepfakes could become powerful ways to spread more disinformation and distrust.

There are legal remedies which may combat some of the more nefarious uses of deepfakes. As explained by the International Association of Privacy Professionals (IAPP), in common law jurisdictions like the United States and the United Kingdom, the victim of a deepfake creation may be able to sue the deepfake’s creator under one of the privacy torts. By way of example, the false light tort requires a claimant to prove that the deepfake in question incorrectly represents the claimant, in a way that would be embarrassing or offensive to the average person.

Another potentially relevant privacy tort is that of misappropriation, or the right of publicity, if the deepfake is used for commercial purposes. Consider, for example, if someone made a deepfake commercial of Meghan, the Duchess of Sussex, endorsing a certain makeup brand. Since individuals generally do not own the copyright interest in their own image (i.e., the photograph or video used to make a deepfake), copyright law is not a good remedy to rely upon. Instead, Meghan could argue that the deepfake misappropriated her personality and reputation for someone else’s unauthorised commercial advantage. However, it’s important to note that personality rights are frustratingly nebulous here in the United Kingdom, as I explained in Fame and Fortune: how celebrities can protect their image.

Depending on the nature of the deepfake, a victim may also be able to sue for the intentional infliction of emotional distress, cyberbullying, or even sexual harassment. But in many instances, the burden of proof for these claims is notoriously difficult to meet.

Furthermore, the practical challenges of suing the creator of a deepfake are considerable. Firstly, such creators are often anonymous or located in another jurisdiction, which makes legal enforcement very difficult. Although a victim could request that the creator’s internet service provider (ISP) remove the deepfake, establishing what is known as “online intermediary liability” and forcing an ISP to get involved can be an uphill battle in and of itself (this was the topic of one of my papers in law school). As for the victim exercising their right to be forgotten under the EU’s General Data Protection Regulation (Article 17, GDPR), the same problem arises: who is responsible for taking down the deepfake?

Secondly, the creator may lodge a defence of free speech or creative expression, especially if the deepfake victim is a political figure or otherwise in the public spotlight. This raises the question: to what extent is a deepfake depicting a member of parliament any different from a satirical cartoon or parody? Unless the deepfake is outrageously obscene or incites actual criminal behaviour, it may be nearly impossible to take legal action.

Deepfakes are but one of many instances where the law has not quite kept up with the rapid development of new technology. Although issues like these keep technology lawyers like me employed, the potential for genuine harm caused by deepfakes in the wrong hands cannot be overstated. It should be fairly clear that outlawing or attempting to ban deepfakes is neither possible nor desirable, but perhaps increased regulation is a viable option. Deepfakes could be watermarked or labelled before being shared by licensed or regulated entities (for example, news organisations), much in the same way that airbrushed models in advertisements must be labelled in France. Doing so may at least slow the proliferation of deepfakes purporting to be genuine.

But until then, the only advice remains that you shouldn’t believe everything you read – or see, or hear – online.

Update 14 June 2019: The European Commission has released a joint Report on the implementation of the Action Plan Against Disinformation – available here.