DeepFakes and False Lights: what does the law say?

What do Scarlett Johansson, cyber intelligence experts and some lawmakers have in common? A shared concern about AI-generated videos. Known as “DeepFakes,” these videos can have a damaging impact on reputations, emotional health, and even national security. But what is the legal status of this disruptive – and oftentimes disturbing – technology?

Deepfake – which combines “deep learning” and “fake” – is commonly defined as an artificial intelligence-based human image synthesis technique. Put simply, it’s a way to superimpose one face over another.

In December 2017, an anonymous Reddit user started a viral phenomenon by using machine-learning software to swap porn performers’ faces with those of famous actresses. Scarlett Johansson, one of the most highly-paid actresses in Hollywood, has herself been the victim of such “creations”. Speaking to the Washington Post, she explained that “nothing can stop someone from cutting and pasting my image or anyone else’s onto a different body and making it look as eerily realistic as desired. There are basically no rules on the internet because it is an abyss that remains virtually lawless.”

It goes without saying that such fake porn videos can easily damage careers, emotional well-being, and a person’s sense of dignity and self-esteem. But there are other implications, too.

As a general starting point, it’s useful to have an understanding of what AI is – and isn’t. “Artificial Intelligence” is not another word for the robot overlords in Blade Runner or even Skynet’s Terminators. Rather, AI is fundamentally a machine-learning application whereby a computer learns to fulfill a certain task on its own. What makes AI special is that machines are essentially “taught” to complete tasks that were previously done by humans, by doing the task over and over again.

With deepfakes, it doesn’t take long for the AI to learn the skill with eerie precision, and produce sophisticated (albeit artificial) images. The technology has many legitimate uses, especially in the film industry, where an actor’s face can be placed on their stunt double’s body. But thanks to continued advancement in the technology itself, the political and legal risks are higher than ever before.

On 29 January, US Director of National Intelligence Dan Coats spoke before the Senate Select Committee on Intelligence to deliver the Worldwide Threat Assessment, which had been compiled by the US intelligence community. The document sets out the biggest global threats in the following order: cyber, online influence operations (including election interference), weapons of mass destruction, terrorism, counterintelligence, emerging and disruptive technologies.

Yes, cyber attacks and online influence operations are discussed before traditional weapons of mass destruction. The report even mentions deepfakes explicitly:

Adversaries and strategic competitors probably will attempt to use deep fakes or similar machine-learning technologies to create convincing—but false—image, audio, and video files to augment influence campaigns directed against the United States and our allies and partners.

Senator Mark Warner, the top Democrat on the Senate Intelligence Committee, explained that “we already struggle to track and combat interference efforts and other malign activities on social media — and the explosion of deep fake videos is going to make that even harder.” This is particularly relevant given the severe political polarization around the world today: from Brexit to Trump and everywhere in between, deepfakes could become powerful ways to spread more disinformation and distrust.

There are some legal remedies which may combat some of the more nefarious aspects of the deepfake. As explained by the International Association of Privacy Professionals (IAPP), in common law jurisdictions like the United States and the United Kingdom, the victim of a deepfake creation may be able to sue the deepfake’s creator under one of the privacy torts. By way of example, the false light tort requires a claimant to prove that the deepfake in question incorrectly represents the claimant, in a way that would be embarrassing or offensive to the average person.

Another potentially relevant privacy tort is that of misappropriation or the right of publicity, if the deepfake is used for commercial purposes. Consider, for example, if someone made a deepfake commercial of Meghan, the Duchess of Sussex endorsing a certain makeup brand. Since individuals generally do not own the copyright interest in their own image (i.e., the photograph or video used to make a deepfake), copyright law is not a good remedy to rely upon. Instead, Meghan could argue that the deepfake misappropriated her personality and reputation for someone else’s unauthorised commercial advantage. However, it’s important to note that personality rights are frustratingly nebulous here in the United Kingdom, as I explained in Fame and Fortune: how celebrities can protect their image.

Depending on the nature of the deepfake, a victim may also be able to sue for the intentional infliction of emotional distress, cyberbullying, or even sexual harassment. But in many instances, the burden of proof to establish these claims can be a notoriously difficult standard to meet.

Furthermore, the practical challenges of suing the creator of a deepfake are considerable. Firstly, such creators are often anonymous or located in another jurisdiction, which makes legal enforcement very difficult. Although a victim could request that the creator’s internet service provider (ISP) remove the deepfake, establishing what is known as “online intermediary liability” and forcing an ISP to get involved can be an uphill battle in and of itself (this was the topic of one of my papers in law school). As for the victim exercising their right to be forgotten under the EU’s General Data Protection Regulation (Article 17, GDPR), the same problem arises: who is responsible for taking down the deepfake?

Secondly, the creator may lodge a defense of free speech or creative expression, especially if the deepfake victim is a political figure or otherwise in the public spotlight. This raises the question: to what extent is a deepfake depicting a member of parliament any different from a satirical cartoon or parody? Unless the deepfake is outrageously obscene or incites actual criminal behaviour, it may be nearly impossible to take legal action.

Deepfakes are but one of many instances where the law has not quite kept up with the rapid development of new technology. Although issues like these keep technology lawyers like myself employed, the potential for genuine harm caused by deepfakes in the wrong hands cannot be overstated. It should be fairly clear that outlawing or attempting to ban deepfakes is neither possible nor desirable, but perhaps increased regulation is a viable option. Deepfakes could be watermarked or labelled before being shared by licensed or regulated entities (for example, news organisations) much in the same way that airbrushed models in advertisements are labelled in France. Doing so may at least slow down the proliferation of deepfakes purporting to be genuine.

But until then, the only advice remains that you shouldn’t believe everything you read – or see, or hear – online.

 

Privacy Day 2019

In 2006 the Council of Europe officially recognised 28 January as a data privacy holiday, to celebrate the date on which the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data was signed in 1981. Also known as Convention 108, this document remains the only international treaty in the field of personal data protection.

In honour of this year’s Privacy Day – also called Data Protection Day – here are a few excerpts from some of my favourite English and American legal cases about privacy.

In 1762, King George III’s Chief Messenger Nathan Carrington and others broke into the home of the writer John Entick. Over the course of four hours, the messengers broke open locks and doors and searched all of the rooms, before taking away charts and pamphlets, and causing £2,000 of damage. The King’s messengers were acting on the orders of Lord Halifax, the newly appointed Secretary of State: Entick later sued Carrington for trespassing on his land. In his judgment in favour of Entick, Chief Justice of the Common Pleas Lord Camden wrote:

Has a Secretary of State a right to see all a man’s private letters of correspondence, family concerns, trade and business? This would be monstrous indeed; and if it were lawful, no man could endure to live in this country.

Today, Entick v Carrington is considered to have deeply influenced the establishment of individual civil liberties, and limiting the scope of executive power. It also served as an important motivation for the Fourth Amendment to the United States Constitution, which guarantees protections to Americans against certain searches and seizures. 

Prince Albert v Strange was an 1849 court decision which began the development of the law of confidence, the common law tort that protects private information. By way of background, both Queen Victoria and Prince Albert sketched as a hobby. William Strange obtained some of these sketches after they had been stolen from Windsor Castle, and published a catalogue showing them. Prince Albert filed suit for the return of the sketches, and the surrender of the catalogue for destruction. The Lord Chancellor, Lord Cottenham, granted Prince Albert’s plea, and explained in his judgment that:

The Court of Chancery will protect everyone in the free and innocent use of his own property, and will prevent other parties from interfering with the use of that property, so as to injure the owner. It is certain every man has a right to keep his own sentiments if he pleases. He has certainly a right to judge whether he will make them public, or commit them only to the sight of his friends. Privacy is a part, and an essential part, of this species of property.

 

In 1967, William Baird was charged with a felony for handing a condom to an unmarried woman who had attended one of his lectures on birth control at Boston University. Under Massachusetts law on “Crimes against chastity”, contraceptives could only be distributed by registered doctors or pharmacists, and only to married persons. The Supreme Court of the United States overturned the law in the 1972 case Eisenstadt v. Baird, and the majority opinion was written by Justice Brennan, who famously wrote:

If the right of privacy means anything, it is the right of the individual, married or single, to be free from unwarranted governmental intrusion into matters so fundamentally affecting a person as the decision whether to bear or beget a child.

In 1982, the state of Pennsylvania enacted legislation that placed a number of restrictions on abortion. In the resulting 1986 case Thornburgh v. American College of Obstetricians and Gynecologists, the Supreme Court overturned the Pennsylvania law, holding (amongst other things) that the “informed consent” and printed materials provisions of the law unduly intruded upon the privacy of patients and physicians. Justice Blackmun penned the opinion, noting:

Our cases long have recognized that the Constitution embodies a promise that a certain private sphere of individual liberty will be kept largely beyond the reach of government. Few decisions are more personal and intimate, more properly private, or more basic to individual dignity and autonomy, than a woman’s decision whether to end her pregnancy. A woman’s right to make that choice freely is fundamental. Any other result, in our view, would protect inadequately a central part of the sphere of liberty that our law guarantees equally to all. 

In 2001, British supermodel Naomi Campbell was photographed leaving a drug rehabilitation clinic, despite having previously denied that she was a recovering drug addict. After the photographs were published in the tabloid The Mirror, Campbell sued for damages in Naomi Campbell v Mirror Group Newspapers. The House of Lords held the paper liable, and Lord Nicholls stated:

The importance of freedom of expression has been stressed often and eloquently, the importance of privacy less so. But it, too, lies at the heart of liberty in a modern state. A proper degree of privacy is essential for the well-being and development of an individual. And restraints imposed on government to pry into the lives of the citizen go to the essence of a democratic state.

In the 2012 case of Federal Aviation Administration v. Cooper, the Supreme Court considered whether the United States Privacy Act of 1974 covers mental and emotional distress caused by an invasion of privacy. The Court held that the Privacy Act’s “actual damages” provision only allowed Cooper to recover for proven pecuniary or economic harm. Justice Sonia Sotomayor wrote the dissent, joined by Justices Ruth Bader Ginsburg and Stephen Breyer. Perhaps unsurprisingly, I personally agree with Justice Sotomayor’s dissent, which noted:

Nowhere in the Privacy Act does Congress so much as hint that it views a $5 hit to the pocketbook as more worthy of remedy than debilitating mental distress, and the contrary assumption [in this case] discounts the gravity of emotional harm caused by an invasion of the personal integrity that privacy protects.

Of course, the cases above provide only a small glimpse into the weird and wonderful world of privacy law. On International Privacy Day in particular, it’s important to remember that the legislation and court cases which shape our understanding of privacy and protection from intrusion go far beyond the modern notion of cyber security.

The right to privacy is a human right!

Do Neo-Nazis have a right to privacy?

Earlier this month, a leftist art collective in Germany called the Centre for Political Beauty (Zentrum für Politische Schönheit or “ZPS”) launched a website to name and shame neo-Nazis. At soko-chemnitz.de, people were invited to examine photographs taken during this summer’s violent anti-immigration protests in Chemnitz, and in exchange for identifying suspected right-wing demonstrators, would receive a crowd-funded reward of at least €30. The twist? The image recognition database was a honeypot: a sophisticated hoax to induce neo-Nazis into identifying themselves.

This recent project gives rise to serious questions regarding the exploitation of personal data for illegitimate or unlawful purposes – even if those purposes are seen by many as socially or ethically justified.

“Doxing” – short for “dropping dox” (documents) – is a term used to describe publicly exposing someone’s real identity on the internet.

The Chemnitz Context

Known as Karl-Marx Stadt when it was part of the Soviet bloc, Chemnitz is an industrial city in eastern Germany with a population of about 250,000. After German reunification in 1990, the political and economic systems changed drastically as democracy and capitalism replaced the communist regime. Meanwhile, as thousands of East Germans relocated to the more prosperous West, expatriates and immigrants filled shortages in the labour market and made their home in East Germany. For the first time in decades, the East was forced to deal with the challenges posed by multiculturalism, immigration and globalism.

Such problems have only intensified in light of Chancellor Merkel’s more liberal migrant policy, which has seen an influx of those seeking asylum and refugee status. Accordingly, Eastern Germany has seen a significant surge in far-right populism and xenophobic protests. In 2017, nearly 25 per cent of the city’s residents voted for the far-right German nationalist party, Alternative for Germany (Alternative für Deutschland, or “AfD”).

Tensions between “native” East Germans and immigrants made headlines again this August, when a German man was stabbed to death in Chemnitz. When police revealed that his two attackers were Kurdish (one from Iraq and the other from Syria), far-right groups quickly organised anti-immigration protests. Nearly 7,000 people joined the demonstrations, which were marked by hate speech and violence against non-Germans. The swastika and other Nazi symbols are banned in Germany, as is the Nazi salute.

The Honeypot

Known for its “activist art”, the ZPS uses satirical stunts, performance pieces and interventions to draw attention to various humanitarian issues. By way of example, the group designed a monument in 2010 to “memorialise” Western co-responsibility for the Srebrenica massacre. In 2017, they built a “Holocaust Memorial” in front of nationalist politician Björn Höcke’s house.

In the weeks following the Chemnitz protests, ZPS published pictures of far-right rioters online at soko-chemnitz.de, and asked visitors to “identify and denounce your work colleagues, neighbors or acquaintances today and collect instant cash!” The rewards started at €34 (£30) with special bonuses awarded for identifying photos of people who were police, or members of Germany’s domestic security agency, the Federal Office for the Protection of the Constitution (Bundesamt für Verfassungsschutz or BfV). While the ZPS had indeed previously identified over 1,500 individuals who participated in the protest, the real goal of the campaign was to get far-right sympathizers to search for and thereby name themselves.

Gesucht: Wo arbeiten diese Idioten? / Wanted: where do these idiots work?

The honeypot design was simple. When visitors entered the website, they were presented with only 20 pictures at a time. Much to the delight of ZPS, Chemnitz protesters went straight to the site’s search bar to type in their own name and the names of fellow participants, to see if they’d already been named. The average visitor searched for the names of seven people.

In this way, the protesters “delivered their own entire network to ZPS without realising it. They told us more about themselves than publicly available sources ever betrayed.” ZPS founder Philipp Ruch claims that use of the website has created “the most relevant set of data on right-wing extremism that currently exists in Germany.”
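The mechanism Ruch describes is worth spelling out: the visible “search” feature is the bait, and the silently accumulated log of who asked about whom is the payload. The sketch below is purely hypothetical – the `HoneypotSearch` class, names and logic are invented for illustration and are not ZPS’s actual code:

```python
# Hypothetical sketch of a honeypot search feature: the caller sees only a
# yes/no answer ("is this name already listed?"), while the site quietly
# records every query and builds a per-visitor map of searched names.

from collections import defaultdict


class HoneypotSearch:
    def __init__(self, published_names):
        self.published = set(published_names)   # the 20-at-a-time public photos/names
        self.query_log = []                     # every (visitor, name) ever searched
        self.network = defaultdict(set)         # visitor -> names that visitor searched

    def search(self, visitor_id, name):
        """Answer whether a name is already listed -- and log the query."""
        self.query_log.append((visitor_id, name))
        self.network[visitor_id].add(name)
        return name in self.published


# A visitor checking on himself and two acquaintances (ZPS reported an
# average of seven searches per visitor) hands over his circle of contacts:
site = HoneypotSearch(published_names=["A. Mustermann"])
for name in ["A. Mustermann", "B. Beispiel", "C. Muster"]:
    site.search(visitor_id="visitor-1", name=name)

print(sorted(site.network["visitor-1"]))
# → ['A. Mustermann', 'B. Beispiel', 'C. Muster']
```

The design choice is the whole trick: nothing of value flows to the visitor beyond a lookup result, while the unpublished names typed into the search bar become the dataset.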

The Controversy

The Special Commission Chemnitz site sparked a huge controversy in Germany for several reasons. Firstly, many questioned the legality of the website itself. Photos of demonstrators were uploaded without permission from the individuals pictured, an action which could potentially contravene German and European data protection law. Although no private information other than the photographs was revealed on soko-chemnitz.de, users were asked to send in the names, addresses, and employers of demonstrators. Deutsche Welle, Germany’s public international broadcaster, reported that “Germany’s data protection commissioner’s office said it was looking into whether the ZPS site was acting within legal limits.”

Members of the ZPS always wear black face paint during public appearances, to symbolize the “soot of German history”. The group’s fundamental mission statement is that “the legacy of the Holocaust is rendered void by political apathy, the rejection of refugees and cowardice.” It believes that Germany should not only learn from its history but also take action.

Beyond purely legalistic questions of data protection law violations, the website raises serious concerns over whether doxing private individuals is ever justified. Much has been written about the free speech rights of those who promote abhorrent ideologies. Those with a more libertarian perspective on free speech will insist that Nazi speech must be defended precisely because it is so controversial. But what about the right to privacy?

In his article entitled Why it’s important to name the Nazis, journalist David Perry argued that identifying those whose pictures appear online attending a public rally is justified: neo-Nazi protesters are people intending to do or to advocate harm, and have therefore surrendered their right to anonymity. The right to freedom of expression does not extend to a right of social impunity. One could also take the view that, as such protests occurred in a public space, any reasonable expectation of privacy was materially lacking.

But in the European – and notably, German – context, rights to privacy are especially treasured given the history of both Nazi and Communist security service tactics. These regimes demonstrated in the most heinous ways possible that the collection of personal information can lead to harm. The idea of encouraging and paying private individuals to “out” their friends, neighbours and colleagues – even if for a seemingly noble cause – does not sit well with many Europeans today. Saxony’s Interior Minister Roland Wöller went so far as to say that the ZPS website “endangered social cohesion”.

Consider the distinction between how the United States and Germany “name and shame” sex offenders. The United States was the first country to establish a national sex offender registration and notification system in 1994. By contrast, Germany has no national sex offender registration legislation, nor a public notification system. This perhaps illustrates the extent to which Germans value the protection of individual privacy, even where those individuals have committed criminal or otherwise morally reprehensible acts.

The soko-chemnitz.de project forces upon the public an uncomfortable question: do neo-Nazis have a right to privacy? Those who say “no” would likely choose to identify and denounce the Chemnitz protesters as potentially dangerous far-right radicals. In so doing, one could take comfort in having participated in some sort of righteous, anti-Nazi resistance movement. But at what cost? Doxing campaigns have gone terribly wrong in the past, and errors in identification can lead to irreparable emotional and reputational damage, or even job loss and suicide. On the other hand, refusing to participate in the campaign could arouse suspicions that one sympathizes or even identifies with the Nazi ideology.

As a piece of political performance art, soko-chemnitz.de was certainly provocative. But it is also politically significant. Coverage of the website forced people to consider their own personal prioritisation of ideals associated with a democratic society: to what extent should we protect privacy, expression, freedom from interference, security, liberty, trust…? It’s a predicament as old as political philosophy itself, and an increasingly uncomfortable balancing act to achieve in today’s world of hyper-surveillance and social media. Perhaps this was the disquieting, satirical reminder the ZPS was hoping to convey all along.

 


*Note on soko-chemnitz.de

ZPS has replaced its original soko-chemnitz website with a splash page explaining the honeypot campaign. You can visit earlier archives of the page using the Wayback Machine. This is what the website looked like on 4 December 2018, absent the images of individuals, which have since been deleted.

Facebook and Privacy: cases, reports and actions in Europe

A list of European enforcement action, official legislative (Parliamentary) reports, and cases concerning Facebook with respect to data protection and privacy. This is a work in progress, last updated November 2018.

Data Protection Commissioner (Ireland) v Facebook Ireland Limited, Maximillian Schrems [Case C-311/18]

  • Jurisdiction: European Union, Ireland
  • Status: Case still in progress
  • Authority: Court of Justice of the European Union
  • Keywords: EU Data Protection Directive (95/46/EC); EU/US Privacy Shield; Fundamental Rights

Transatlantic Data Transfers: US-EU Privacy Shield under review

When personal data travels between Europe and America, it must cross international borders lawfully. If certain conditions are met, companies can rely on the US-EU Privacy Shield, which functions as a sort of “tourist visa” for data. 

Earlier this week (19 November) the United States Federal Trade Commission finalised settlements with four companies that the agency accused of falsely claiming to be certified under the US-EU Privacy Shield framework. This news closely follows the highly anticipated second annual joint review of the controversial data transfer mechanism. 

IDmission LLC, mResource LLC, SmartStart Employment Screening Inc., and VenPath Inc. were slapped on the wrist by the FTC over allegations that they misrepresented their certification. But this is just the latest chapter in an ongoing debate regarding the Privacy Shield’s fitness for purpose. Only this summer, the European Parliament urged the European Commission to suspend the Privacy Shield programme over security and privacy concerns.

Background and purpose

Designed by the United States Department of Commerce and the European Commission, the Privacy Shield is one of several mechanisms by which personal data can be sent and shared between entities in the EU and the United States. The Privacy Shield framework thereby protects the fundamental digital rights of individuals who are in the European Union, whilst encouraging transatlantic commerce.

This is particularly important given that the United States has no single, comprehensive law regulating the collection, use and security of personal data. Rather, the US uses a patchwork system of federal and state laws, together with industry best practice. At present, the United States as a collective jurisdiction fails to meet the data protection requirements established by EU lawmakers.

As such, should a corporate entity or organisation wish to receive European personal data, it must bring itself in line with EU regulatory standards by becoming certified under the Privacy Shield. To qualify, companies must self-certify annually that they meet the requirements set out by EU law. This includes taking measures such as displaying a privacy policy on their website, replying promptly to any complaints, providing transparency about how personal data is used, and ensuring stronger protection of personal data.

Today, more than 3,000 American organisations are authorised to receive European data, including Facebook, Google, Microsoft, Twitter, Amazon, Boeing, and Starbucks. A full list of Privacy Shield participants can be found on the privacyshield.gov website.

Complaints and non-compliance?

There is no non-compliance. We are fully compliant. As we’ve told the Europeans, we really don’t want to discuss this any further.

—Gordon Sondland, American ambassador to the EU

Although the Privacy Shield imposes stronger obligations than its predecessor, the now-obsolete “Safe Harbor,” European lawmakers have argued that “the arrangement does not provide the adequate level of protection required by Union data protection law and the EU Charter as interpreted by the European Court of Justice.”

In its motion to reconsider the adequacy of the Privacy Shield, the EU Parliament stated that “unless the US is fully compliant by 1 September 2018” the EU Commission would be called upon to “suspend the Privacy Shield until the US authorities comply with its terms.” The American ambassador to the EU, Gordon Sondland, responded to the criticisms, explaining: “There is no non-compliance. We are fully compliant. As we’ve told the Europeans, we really don’t want to discuss this any further.”

Věra Jourová, a Czech politician and lawyer who serves as the European Commissioner for Justice, Consumers and Gender Equality, expressed a different view: “We have a list of things which needs to be done on the American side” regarding the upcoming review of the international data transfer deal. “And when we see them done, we can say we can continue.”

Photo: Ambassador Sondland with Commissioner Jourova in the Berlaymont.
Jourová and Sondland, via a tweet from Sondland saying he was “looking forward to our close cooperation on privacy and consumer rights issues that are important to citizens on both sides of the Atlantic.” 

The list from the Parliament and the First Annual Joint Review [WP29/255] (.pdf) concerns institutional, commercial, and national security aspects of data privacy, including:

  • American surveillance powers and use of personal data for national security purposes and mass surveillance. In particular, the EU is unhappy with America’s re-authorisation of section 702 of the Foreign Intelligence Surveillance Act (FISA), which authorises government collection of foreign intelligence from non-Americans located outside the United States (Remember Edward Snowden and PRISM? See the Electronic Frontier Foundation’s explanation here.)
  • Lack of auditing or other forms of effective regulatory oversight to ensure whether certified companies actually comply with the Privacy Shield provisions
  • Lack of guidance and information made available for companies
  • Facebook and the Cambridge Analytica scandal, given that 2.7 million EU citizens were among those whose data was improperly used. The EU Parliament stated it is “seriously concerned about the change in the terms of service” for Facebook
  • Persisting weaknesses regarding the respect of fundamental rights of European data subjects, including lack of effective remedies in US law for EU citizens whose personal data is transferred to the United States
  • The Clarifying Lawful Overseas Use of Data (“CLOUD”) Act, signed into law in March 2018, allows US law enforcement authorities to compel production of communications data, even if that data is stored outside the United States
  • Uncertain outcomes regarding pending litigation currently before European courts, including Schrems II and La Quadrature du Net and Others v Commission.

 

Image result for max schrems
Max Schrems is an Austrian lawyer and privacy activist. In 2011, at the age of 25, while studying abroad at Santa Clara University in Silicon Valley, Schrems decided to write his term paper on Facebook’s lack of awareness of European privacy law. His activism led to the replacement of the Safe Harbor system by the Privacy Shield.

What happens if the Privacy Shield is suspended?

In a joint press release last month, the representatives from the EU and USA together reaffirmed “the need for strong privacy enforcement to protect our citizens and ensure trust in the digital economy.” But that may be easier said than done.

In the event that the Privacy Shield is suspended, entities transferring European personal data to the United States will need to consider implementing alternative compliant transfer mechanisms, which could include the use of Binding Corporate Rules, Model Clauses, or establishing European subsidiaries. To ensure that the American data importer implements an efficient and compliant arrangement, such alternatives would need to be assessed on a case-by-case basis involving careful review of data flows, and the controller and processors involved.

Regardless of the method used to transfer data, American companies must ensure that they receive, store, or otherwise use European personal data only where lawfully permitted to do so. The joint statement noted above concluded by saying that the “U.S. and EU officials will continue to work closely together to ensure the framework functions as intended, including on commercial and national-security related matters.”

The European Commission is currently analysing information gathered from its American counterparts, and will publish its conclusions in a report before the end of the year.

NDAs and the Sound of Silence

“When truth is replaced by silence, the silence is a lie.” 
Yevgeny Yevtushenko

The #MeToo movement has brought the use of Non-Disclosure Agreements (NDAs) to silence allegations of sexual harassment into the public debate. In light of controversies surrounding Donald Trump, Harvey Weinstein and now Sir Philip Green – the billionaire retailer whose brands include Topshop – much has been discussed about the legality and morality of using NDAs to prevent publicity or otherwise cover up bad behaviour.

But like any legal document, NDAs are not inherently “good” or “bad”. They are simply a tool, regularly used by lawyers in many contexts. To understand why they have become controversial, and to contribute to the debate concerning their use and abuse, we must first consider their structure and purpose.

NDAs, which are also called Confidentiality Agreements, are simply a type of contract used to prevent someone from sharing confidential information in ways which are unacceptable or damaging to another person. What information is considered “confidential” depends very much on the situation, as well as the relationship between the person providing the information (“discloser“) and the person receiving it (“recipient“).

Use of the word “confidential” to mean “intended to be treated as private” dates from the 1770s, and has its roots in the Latin word confidentia. This means “firmly trusting,” and is itself derived from confidere, which means “to have full trust or reliance.” 

Confidential information is often shared for a business purpose or in corporate negotiations, especially when mergers or collaborations occur. For example, a restaurant chain looking for a deal with a food manufacturer may want to share recipes, or a fashion designer may seek a partnership with a well-known athlete who has sketches and drawings of a sports-inspired clothing range. Likewise, when a company hires a new employee, they may be given access to company client lists, manufacturing processes or other valuable data.

The basic anatomy of the NDA is relatively straightforward, and should always contain the following elements:

  • A clear definition of the confidential information.
    These are often heavily negotiated clauses, and it is usual to have very wordy and detailed definitions which set out explicitly what is and is not captured by the agreement. Sometimes, even the NDA itself is considered “confidential information,” which means that its terms or existence must be kept secret. The discloser will often want a broad definition of confidential information which covers not only the documents or products in question, but perhaps any derivative ideas, feedback, analysis or concepts created or inspired by the confidential information. On the other hand, the receiving party will want to keep this definition as narrow as possible.


  • The key obligation to keep the information secret.
    Standard wording will typically begin as follows: “In return for the discloser making confidential information available to the recipient, the recipient promises to the discloser that it shall keep the confidential information secret and confidential.” However, the obligation clause almost always contains many more rules and responsibilities. For example, the recipient may be prohibited from even indirectly sharing or hinting at the confidential information. They may also be prohibited from making copies, removing the information from a particular location, or storing it on their personal smartphone.


  • The ways in which the information can be used.
    The recipient will be prohibited from using or exploiting the confidential information except for the “purpose.” The purpose is the defined reason the information is shared in the first place, for example, “to establish a collaboration in respect of the Tommy Hilfiger x Lewis Hamilton fashion line.” Disclosures of the information by the recipient to their employees and professional advisers (including lawyers and accountants) are usually permitted. In such cases, the discloser may ask that all individuals who receive the confidential information from the recipient sign a separate confidentiality agreement. While some may consider this a bit over the top, it makes sense from the discloser’s perspective that the receiver should take responsibility if its employees or advisers breach confidentiality.


  • What happens if the project or deal does not go ahead, and the duration of the secrecy.
    The discloser will often ask that the receiver returns or destroys the confidential information if the project or transaction fails to materialise. The parties should also establish a realistic time period for the duration of the secrecy, as it may be unreasonable to expect that the information has to remain confidential for eternity.
Lilly Panholzer for City finds it is easy to silence women with NDAs

Seems simple enough, so what’s all the fuss about?

As mentioned above, NDAs are incredibly common and used in a wide variety of situations, ranging from complex corporate takeovers to short-term collaborations. But despite their ubiquitous nature and seemingly straightforward terms, it would be a mistake to assume that these are simple contracts. 

It is rare for the parties entering the agreement to have perfectly equal bargaining power. Due to an imbalance of money, expertise, resources or even reputation, one of the parties involved will almost always be able to exert more influence over the other. This inherent imbalance can lead to the creation of NDAs which grant – or limit – rights in an unfair or improper way.

Entrepreneurs may think that an NDA adequately protects their valuable information when it is divulged to a potential investor. But unless the definitions and obligations are sufficiently locked down, little may prevent the investor from stealing the entrepreneur’s ideas.

Similarly, some unscrupulous companies may attempt to force their employees to enter into NDAs in an attempt to prevent whistleblowing or discrimination lawsuits. Matters can become very complex when an individual who has a grievance against a powerful boss is threatened with dismissal or further harassment unless they sign an NDA. Moreover, an increasingly common extension of NDAs is the inclusion of a “non-disparagement” clause. This goes beyond the protection of confidential information, and requires employees never to speak negatively about their employer or former employer.

In both the United States and the United Kingdom, lawmakers and courts have begun to establish clearer boundaries around the enforceable scope of NDAs. In the court of public opinion, powerful individuals who weaponise NDAs in an attempt to stifle access to justice, impair free speech and limit creativity are already losing. Regardless of the reason for entering an NDA, you owe it to yourself to ensure the document is checked first by a lawyer, and that your rights – and remedies – are adequately protected.


Airbrushing history? Photos of Oxford Student Celebrations Raise Questions About Privacy Rights and Journalism

A former Oxford University student asked image agency Alamy to remove photographs of her celebrating the end of exams. Now, the photographer accuses Alamy of “censoring the news”. Is this a threat to freedom of the press, or has the woman’s human right to privacy been correctly protected?

The end of exams is a liberating and happy time for university students around the world. At Oxford, students take their celebrations to another level by partying en masse in the streets, covering each other in champagne, shaving foam, confetti, flour and silly string in a tradition known as “Trashing.”

An Alamy photo of Oxford celebrations from 1968. “Trashing” has become a bit crazier since the 1990s.

Speaking to the Press Gazette, photographer Greg Blatchford explained that during the 2014 Trashing, a student invited him to take photographs of her celebrating on the public streets. Some of the images show her swigging from a bottle of champagne, while in others she is covered in silly string.

Blatchford then sent “about 20” images to Alamy as news content. The former student subsequently stated that she “loved” the images in email correspondence to Blatchford, and even shared them on Facebook. This summer, four years later, the woman contacted Alamy to have the photos deleted. The company removed the images – much to Blatchford’s dismay.

An Alamy stock image of Oxford University Trashing celebrations. Note: THIS IS NOT ONE OF THE SUBJECT PHOTOGRAPHS.

The right to be forgotten under the GDPR

Because the woman can be identified from the photographs, they constitute “personal data” as defined by Article 4 of the General Data Protection Regulation (GDPR). Under Article 17 GDPR, data subjects have the right, in certain circumstances, to compel the erasure of personal data concerning them.

For example, if the data was originally collected or used because the individual gave their consent, and that consent is subsequently withdrawn, the company must generally honour the request for deletion (Article 17(1)(b)). However, a company can mount a “counter attack” if an exception applies. Importantly for news and media agencies, if keeping the data is necessary for exercising the right of freedom of expression and information, they may be able to refuse the request and keep the data (Article 17(3)(a)).

For more details on how the right to be forgotten works in practice, see my earlier post, Now You’re Just Somebody That I Used to Know.

Are journalists under threat from privacy lawyers?

Blatchford explained that although the photos are now considered “stock images,” they were originally “news” photos and should not have been removed. By deleting the photos, Alamy “are censoring the news. I’m incensed that someone can influence news journalism and censor the past where clearly if photographs are taken in public, with the full consent of participants they can turn around and say ‘sorry, that’s not news’ later. This sets a precedent for anybody to walk up to a news organisation and say I don’t like the pictures of me. Journalists will then start feeling the threat of lawyers.”

In a statement to the Press Gazette, Alamy’s director of community Alan Capel said the images were submitted as news four years ago, but moved 48 hours later to the stock collection. “Therefore we are surprised that this is deemed to be ‘censoring the news.’ As per our contract with our contributors, we can remove any images from our collection if we see a valid reason to do so.”

The university said that participating in Trashing can lead to fines and disciplinary action, since it is against the university’s code of conduct. The comical images of students wearing sub fusc (formal academic attire) while partying are often published in newspapers around the country in May.

Privacy and press freedom have long been considered competing interests, but that’s not to say that striking an appropriate balance between the two is impossible.

On some level, I do sympathise with the photographer. I also struggle to buy Alamy’s argument that the images are not “news content” and are now “stock images.” The classification of an image should be based on its context, purpose and subject matter – not the time that has elapsed since the event, nor the label attributed to it on a website.

Stock images are, by definition, professional photographs of common places, landmarks, nature, events or people. By contrast, the Oxford Trashing photos are attributed to a specific time (May), place (Oxford), category of people (students), and event (celebrating the end of exams). They are popular for several reasons. Firstly, they illustrate a charming and comical juxtaposition: although these students attend one of the oldest and most prestigious universities in the world, they are – after all – entitled to a bit of fun. Secondly, Trashing has received increased press attention in recent years, as students have become subject to complaints, fines, disciplinary action, and even police enforcement. These images clearly show, in ways that words alone cannot, matters of public interest.


In this particular instance, however, I think Alamy made the right decision in deleting the images.

Although the Press Gazette does not name the woman, it does note she is “a marketing director in New York.” It’s entirely plausible that she has valid concerns that the images of her participating in Trashing may negatively impact her reputation and career, or otherwise cause some sort of harm or embarrassment.

She claims that “there was no consent given to publish or sell my photos anywhere. I am not a model nor have given permission to any photographers to take photos of me to publicly display or to sell. This was a complete breach of privacy.” This contradicts what the email records show, but even if she had lawfully consented to the photographs being taken at the time, she is entirely within her rights to withdraw that consent now.

On balance, Alamy probably has dozens – if not hundreds – of images from the 2014 Trashing at Oxford. The likelihood that the images of this woman in particular are somehow especially newsworthy is minimal. Had Alamy refused to delete the photos, the woman would have been entitled to raise a complaint with the Information Commissioner’s Office. ICO enforcement action can include injunctions, sanctions, or monetary fines. Furthermore, Alamy would risk becoming known as an organisation that doesn’t care about privacy laws, thereby damaging its reputation.

Contrary to Blatchford’s concerns, it is doubtful that an organisation would delete a genuinely newsworthy image, simply because someone doesn’t like how they look. The right to be forgotten is not an absolute right to be purged from history, but a right to regain control of how information about you appears online.

For more details on how the right to be forgotten works in practice, see my earlier post, Now You’re Just Somebody That I Used to Know. If you’re interested in how celebrities control images of themselves, see Fame and Fortune: How do Celebrities Protect Their Image?

Header image by Alex Krook via Flickr