Forging Authenticity: Experts’ workshop on Deepfake Technology, Risks and Governance

In September, I had the privilege of travelling to the Swiss Re Centre for Global Dialogue in Zürich, Switzerland for a two-day conference on deepfakes.

The conference was hosted by the International Risk Governance Center (IRGC), whose objective is to better understand emerging and systemic risks, as well as the governance of opportunities and risks associated with new technologies. Because the conference was subject to the Chatham House Rule and a paper from the event is forthcoming, I can’t go into too much detail. However, I thought it might be nice to set out in broad terms the topics of discussion and a few comments on what I found most interesting… as well as share some photos!

SUNDAY.

As the conference was scheduled for Monday and Tuesday, I flew out to Switzerland on the Sunday evening. In retrospect this was a very good decision, because the British Airways strike began the following day, and I was pleased to not have to worry about travel problems!

Dinner in the departure lounge! I had the chance to review some notes and watch the planes take off from London City Airport, over a nice glass of wine.

Transport in Zürich was super smooth. My plane landed at 22:47, and I was through passport control (with a stamp!) by 23:07. I was on the train by 23:15, and it arrived at Zürich Hauptbahnhof (the central train station) by 23:30. My hotel was just a few minutes’ walk from there!

MONDAY.

I woke up bright and early for the chance to have a morning walk through central Zürich, having never been to the city before. My hotel was right on the shore of the Zürichsee (Lake Zürich) and I had a really nice croissant for breakfast at Confiserie Sprüngli! I walked along Bahnhofstrasse, which is Zürich’s main downtown street and also one of the world’s most expensive and exclusive shopping avenues. I was also feeling somewhat nervous about the conference – in a good way, of course! – so stepping out into the fresh autumnal air was a nice way to mentally prepare for the day ahead.


SWISS RE CENTRE FOR GLOBAL DIALOGUE

For those of you who might not know, Swiss Re is the world’s second-largest reinsurance company. Its Centre for Global Dialogue is located just outside of Zürich, with breathtaking views of Lake Zürich and the Alps. From my hotel room (pictured below), I could even see the lake!

my press photo! 😊

Getting settled in at the conference!

SETTING THE SCENE: DEEPFAKE TECHNOLOGY

The conference itself began in the early afternoon with a panel discussion by experts from IBM Watson, the California-based tech company NVIDIA, and the Idiap Research Institute, which is affiliated with the École Polytechnique Fédérale de Lausanne. In this technology session, we discussed:

  • The technologies that have enabled deepfake creation and distribution.
  • The plausible trajectory for these technologies, and what the deepfake ecosystem might look like in five years.
  • Promising technologies for countering deepfakes, and what research advances might help reduce risks.
  • Whether or not there are “information hazards” arguments for restricting access to research in this area, to prevent its use for malicious purposes.

Three things from this session really stuck in my mind. Firstly, the reminder that from a security standpoint, humans really are the biggest risk to any technological system. In particular, burnout poses a challenge, because we cannot stay hyper-vigilant at all times. Secondly, I found it interesting to note that detection is unlikely to be a winnable arms race. Watermarking and fingerprinting are good ideas in theory, but workable solutions would be difficult to build: if we required watermarks for certain media, would the lack of a watermark indicate a fake? Watermarks can be easily removed or added. Thirdly, the general consensus of the group was that the biggest risk posed by deepfakes is the degradation of standard notions of trust.
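To make the watermarking and fingerprinting point concrete, here is a minimal sketch – my own illustration, not anything presented at the workshop – of a perceptual “average hash” fingerprint using the Pillow imaging library. It shows the appeal and the fragility at once: an innocent re-encode usually stays within a few bits of the original, but crops, filters or deliberate perturbations can push the distance out, and nothing stops a forger from publishing a fake that carries a perfectly valid fingerprint of its own.

```python
from PIL import Image  # pip install Pillow

def average_hash(path: str, size: int = 8) -> int:
    """Shrink to a tiny greyscale image and threshold each pixel on the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | int(p > mean)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

# Hypothetical files: a known-genuine original and a suspect copy.
h1 = average_hash("original.jpg")
h2 = average_hash("suspect.jpg")
print(f"distance: {hamming(h1, h2)} of 64 bits")  # small = likely the same picture
```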

DEEPFAKE RISKS & VULNERABILITIES

The second session was led by representatives from Zurich Insurance, the French banking giant BNP Paribas, and the Swiss Federal Institute of Technology in Zurich. In this session covering various deepfake risks, we focused on the following points:

  • Are there reasons to worry more about deepfakes than about the other forms of deception and manipulation we’ve used throughout history?
  • Who or what is most at risk of harm: individuals, businesses, public institutions, or society at large?
  • What kinds of harm are of greatest concern? Harms could include fabricated evidence (such as insurance claims or judicial evidence), reputational damage, abuse/intimidation/extortion, manipulation of public opinion (including elections), and market manipulation.
  • Are there beneficial uses of deepfake technologies that need to be excluded from regulatory interference?

The key point of this discussion concerned the tension between risk minimisation on the one hand, and the protection of certain liberties and economic freedoms on the other. It’s important to note that, historically, threats posed by technology have been used to force through pernicious changes to the law or expansions of government surveillance. Just think back to the post-9/11 USA PATRIOT Act, or more recently to the UK’s Snoopers’ Charter. To minimise deepfake risks, we could certainly utilise certain forms of data monitoring, profiling and censorship, but at what cost?

DINNER AT THE VILLA

After an intense day of discussions and debate, we headed across the courtyard of the Centre to this beautiful Villa for drinks and dinner.

Dinner was held at this stunning villa, located on the same grounds as the Centre.

gratuitous selfie

Drinks and discussions about deepfakes – what a great combination!

TUESDAY.

Tuesday morning’s sunrise view from my room!

LEGAL & REGULATORY RESPONSES

This session, which approached things from a legal and regulatory perspective, was probably my favourite. It was also special for me, because it was my first time ever moderating a panel discussion! We discussed which existing laws and regulations can be applied to problematic deepfakes: for example, those concerning fraud, privacy, defamation, stalking, and electoral law.

Legislatures in the United States as well as the United Kingdom have for several years now sought to address online sexual harassment, with numerous jurisdictions criminalising so-called “revenge porn”. Given that deepfakes first gained notoriety as manipulated pornographic videos, it seems only reasonable that some lawmakers have proposed specific bans on deepfakes depicting obscene sexual activity. Furthermore, in September 2019 Texas became the first state to criminalise deepfake videos made with intent to injure a political candidate or influence an election.

But are these legal instruments sufficient to address deepfake risks, or are new laws needed? In addition to the above, we also discussed:

  • The potential impact of deepfakes on the legal/judicial system, for example in terms of admissibility of audio/video evidence.
  • Whether there is any need for – or prospect of – converging responses to deepfakes in different jurisdictions.

BREAK-OUT GROUP: CORPORATE & INSURANCE

After our coffee break on Tuesday, we divided up into smaller groups. I chose the Corporate and Insurance group – and I’m so glad I did, because I learned so much! Our main discussion focused on the potential financial risks to companies, investors, and markets more generally. Such risks could range from fraud against customers to deepfakes designed to manipulate a company’s stock price or whole markets. From an insurance perspective, we discussed whether deepfake technologies create new challenges for the insurance industry, particularly in terms of vulnerability to fraudulent claims.

You may be wondering why the insurance industry cares about manipulated videos. In essence, it comes back to the point above about truth and trust. Today, many insurance claims can be supported through online evidence submissions: take, for example, a photograph of your car after someone rear-ends it. If insurance fraud goes up through the use of deepfakes – despite detection software – this increased risk will be transferred to the insured, and premiums will rise. Without a doubt, we are living in a data-driven world, with insurers gathering more and more data about the activities connected to the policies they write. There is an ever-growing amount of data available thanks to the Internet of Things (IoT), credit checking websites, and public information: it’s easy to imagine the ways that deepfakes could threaten that stability.

After our break-out group sessions, we enjoyed a really (really!) nice gourmet buffet. Not pictured: me, chatting with really lovely and super smart people!

WHAT NEXT?

There’s only so much a group of lawyers, insurers, and computer scientists can cover in two days. In our final session, we discussed the questions that are likely to remain unanswered… at least, for now.

  • What are the potential societal implications of deepfakes, in terms of levels of trust, standards of truth, and electoral manipulation?
  • What is the value of trust in the digital age?
  • What role do “technologies of trust” have in response to the decline of older norms and patterns of social trust?
  • At an individual and societal level, can anything be done to reduce the viral sharing of false and harmful content?
  • What are the immediate priorities — what decisions could be taken now to improve incentives around content authenticity and integrity?
  • Are there wider lessons to be learned from the deepfake phenomenon about the governance of emerging technologies?

There were so many insightful and thought-provoking moments during this conference. In conclusion, I can’t help but wonder whether we have taken “easy evidence” for granted. Was this technological evolution inevitable? Will the rise of the deepfake require us to place more faith in non-recorded forms of trust and truth, such as eyewitness reports? Perhaps the special privilege we have afforded to video — to digital truth — is ending.

DeepFakes and False Lights: what does the law say?

What do Scarlett Johansson, cyber intelligence experts and some lawmakers have in common? A shared concern about AI-generated videos. Known as “DeepFakes,” these videos can have a damaging impact on reputations, emotional health, and even national security. But what is the legal status of this disruptive – and oftentimes disturbing – technology?

Deepfake – which combines “deep learning” and “fake” – is commonly defined as an artificial intelligence-based human image synthesis technique. Put simply, it’s a way to superimpose one face over another.

In December 2017, an anonymous Reddit user started a viral phenomenon by using machine learning software to swap porn performers’ faces with those of famous actresses. Scarlett Johansson, one of the most highly-paid actresses in Hollywood, has herself been the victim of such “creations”. Speaking to the Washington Post, she explained that “nothing can stop someone from cutting and pasting my image or anyone else’s onto a different body and making it look as eerily realistic as desired. There are basically no rules on the internet because it is an abyss that remains virtually lawless.”

It goes without saying that such fake porn videos can easily damage careers, emotional well-being, and a person’s sense of dignity and self-esteem. But there are other implications, too.

As a general starting point, it’s useful to have an understanding of what AI is – and isn’t. “Artificial Intelligence” is not another word for the robot overlords in Blade Runner or even Skynet’s Terminators. Rather, AI is fundamentally a machine-learning application whereby a computer learns to fulfil a certain task on its own. What makes AI special is that machines are essentially “taught” to complete tasks that were previously done by humans, by doing the task over and over again.

With deepfakes, it doesn’t take long for the AI to learn the skill with eerie precision, and produce sophisticated (albeit artificial) images. The technology has many legitimate uses, especially in the film industry, where an actor’s face can be placed on their stunt double’s body. But thanks to continued advancement in the technology itself, the political and legal risks are higher than ever before.
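For the technically curious, the early face-swap tools were generally built around one shared encoder and a separate decoder per identity. Below is a deliberately tiny PyTorch sketch of that idea – my own simplification for illustration, not the code of any actual tool. Because both faces pass through the same encoder, decoder B learns to rebuild person B from the shared representation; feeding it an encoding of person A then yields B’s face with A’s pose and expression.

```python
import torch
import torch.nn as nn

LATENT = 64
IMG = 64 * 64 * 3  # flattened 64x64 RGB face crops (toy dimensions)

# One shared encoder, one decoder per identity (real tools use conv nets).
encoder = nn.Sequential(nn.Linear(IMG, 256), nn.ReLU(), nn.Linear(256, LATENT))
decoder_a = nn.Sequential(nn.Linear(LATENT, 256), nn.ReLU(), nn.Linear(256, IMG), nn.Sigmoid())
decoder_b = nn.Sequential(nn.Linear(LATENT, 256), nn.ReLU(), nn.Linear(256, IMG), nn.Sigmoid())

loss_fn = nn.MSELoss()
params = [*encoder.parameters(), *decoder_a.parameters(), *decoder_b.parameters()]
opt = torch.optim.Adam(params, lr=1e-3)

# Random tensors stand in for aligned face crops of two real people.
faces_a, faces_b = torch.rand(32, IMG), torch.rand(32, IMG)

for step in range(200):  # training: each decoder reconstructs its own person
    loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
            + loss_fn(decoder_b(encoder(faces_b)), faces_b))
    opt.zero_grad()
    loss.backward()
    opt.step()

# The "swap": encode person A, decode with person B's decoder.
with torch.no_grad():
    fake_b = decoder_b(encoder(faces_a))
```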

On 29 January, US Director of National Intelligence Dan Coats spoke before the Senate Select Committee on Intelligence to deliver the Worldwide Threat Assessment, which had been compiled by the US intelligence community. The document sets out the biggest global threats in the following order: cyber, online influence operations (including election interference), weapons of mass destruction, terrorism, counterintelligence, and emerging and disruptive technologies.

Yes, cyber attacks and online influence operations are discussed before traditional weapons of mass destruction. The report even mentions deepfakes explicitly:

Adversaries and strategic competitors probably will attempt to use deep fakes or similar machine-learning technologies to create convincing—but false—image, audio, and video files to augment influence campaigns directed against the United States and our allies and partners.

Senator Mark Warner, the top Democrat on the Senate Intelligence Committee, explained that “we already struggle to track and combat interference efforts and other malign activities on social media — and the explosion of deep fake videos is going to make that even harder.” This is particularly relevant given the severe political polarization around the world today: from Brexit to Trump and everywhere in between, deepfakes could become powerful ways to spread more disinformation and distrust.

There are some legal remedies which may combat some of the more nefarious aspects of the deepfake. As explained by the International Association of Privacy Professionals (IAPP), in common law jurisdictions like the United States and the United Kingdom, the victim of a deepfake creation may be able to sue the deepfake’s creator under one of the privacy torts. By way of example, the false light tort requires a claimant to prove that the deepfake in question incorrectly represents the claimant, in a way that would be embarrassing or offensive to the average person.

Another potentially relevant privacy tort is that of misappropriation, or the right of publicity, if the deepfake is used for commercial purposes. Consider, for example, if someone made a deepfake commercial of Meghan, the Duchess of Sussex, endorsing a certain makeup brand. Since individuals generally do not own the copyright interest in their own image (i.e., the photograph or video used to make a deepfake), copyright law is not a good remedy to rely upon. Instead, Meghan could argue that the deepfake misappropriated her personality and reputation for someone else’s unauthorised commercial advantage. However, it’s important to note that personality rights are frustratingly nebulous here in the United Kingdom, as I explained in Fame and Fortune: how celebrities can protect their image.

Depending on the nature of the deepfake, a victim may also be able to sue for the intentional infliction of emotional distress, cyberbullying, or even sexual harassment. But in many instances, the burden of proof to establish these claims can be a notoriously difficult standard to meet.

Furthermore, the practical challenges of suing the creator of a deepfake are considerable. Firstly, such creators are often anonymous or located in another jurisdiction, which makes legal enforcement very difficult. Although a victim could request that the creator’s internet service provider (ISP) remove the deepfake, establishing what is known as “online intermediary liability” and forcing an ISP to get involved can be an uphill battle in and of itself (this was the topic of one of my papers in law school). As for the victim exercising their right to be forgotten under the EU’s General Data Protection Regulation (Article 17, GDPR), the same problem arises: who is responsible for taking down the deepfake?

Secondly, the creator may lodge a defence of free speech or creative expression, especially if the deepfake victim is a political figure or otherwise in the public spotlight. This raises the question: to what extent is a deepfake depicting a member of parliament any different from a satirical cartoon or parody? Unless the deepfake is outrageously obscene or incites actual criminal behaviour, it may be nearly impossible to take legal action.

Deepfakes are but one of many instances where the law has not quite kept up with the rapid development of new technology. Although issues like these keep technology lawyers like myself employed, the potential for genuine harm caused by deepfakes in the wrong hands cannot be overstated. It should be fairly clear that outlawing or attempting to ban deepfakes is neither possible nor desirable, but perhaps increased regulation is a viable option. Deepfakes could be watermarked or labelled before being shared by licensed or regulated entities (for example, news organisations), much in the same way that airbrushed models in advertisements are labelled in France. Doing so may at least slow down the proliferation of deepfakes purporting to be genuine.

But until then, the only advice remains that you shouldn’t believe everything you read – or see, or hear – online.

Update 14 June 2019: The European Commission has released a joint Report on the implementation of the Action Plan Against Disinformation – available here.

Is posting rap lyrics on Instagram a #HateCrime?

A teenager who posted rap lyrics on Instagram has been convicted of “sending a grossly offensive message over a communications network,” which was uplifted to a hate crime. She has received a community order (probation) and must pay costs of £500 ($700 USD) together with an £85 victim surcharge.

Chelsea Russell, a 19-year-old woman from Liverpool, England, posted the lyrics from American rapper Snap Dogg’s song, “I’m Trippin” (NSFW) onto her Instagram account profile, or “bio.” The lyrics in question were ‘kill a snitch n—-a and rob a rich n—-a.’

Russell claimed she posted the lyrics as a “tribute” following the death of a 13-year-old in a road traffic accident. A screenshot of her profile was sent anonymously to Merseyside Police’s Hate Crime Unit, and Russell was brought in for questioning. She attempted to claim that her Instagram was not public, as only Instagram members could access it, but the Crown proved in court that anyone could see Russell’s bio.

Russell’s defence lawyer, Carole Clark, claimed that the meaning of the ‘n’ word has changed over time, the word having been popularised by hugely successful and mainstream artists including Jay-Z, Eminem and Kanye West. In particular, “Jay-Z used these words in front of thousands of people at the Glastonbury festival.” Clark also pointed out that the spelling of the word ended in an “-a,” rather than the arguably more pejorative “-er.” Nevertheless, District Judge Jack McGarva found Russell guilty and added, “there is no place in civil society for language like that.”

Under section 127 of the Communications Act 2003, it is an offence to send a message that is grossly offensive by means of a public electronic communications network. Section 145 of the Criminal Justice Act 2003 provides for an increased sentence for aggravation related to race.

Under the Crown Prosecution Service’s guidelines on prosecuting cases involving communications sent via social media, before heading to trial, there must be sufficient evidence that the communication in question is more than:

  • Offensive, shocking or disturbing; or
  • Satire, parody, iconoclastic or rude; or
  • The expression of unpopular or unfashionable opinion, banter or humour, even if distasteful to some or painful to those subjected to it.
“Freedom of expression is applicable not only to ideas that are favourably received or regarded as inoffensive or as a matter of indifference, but also to those that offend, shock or disturb the State, or any sector of the population.”
—European Court of Human Rights decision in Handyside v United Kingdom (1976)
It may be tempting to see this verdict as an overreach by the Crown Prosecution Service, or “political correctness” gone mad. However, the lyrics Russell chose to share don’t simply drop the N-bomb. These lyrics may be understood as encouraging the killing and robbing of a particular type of person — i.e., black men.

The distinction between “offensive” and “grossly offensive” is an important one and not easily made. Context and circumstances are highly relevant. The legal test for “grossly offensive” was stated by the House of Lords in Director of Public Prosecutions v Collins [2006] UKHL 40 to be whether the message would cause gross offence to those to whom it relates – in that case ethnic minorities – who may, but need not, be the recipients.

Accordingly, there is a high threshold at the evidential stage. The Crown must also consider an author’s rights of expression enshrined in the European Convention on Human Rights. Extreme racist speech falls outside the protection of Article 10 because of its potential to undermine public order and the rights of the targeted minority (Kühnen v Germany 56 DR 205).

Article 10 – Freedom of expression

1. Everyone has the right to freedom of expression. This right shall include freedom to hold opinions and to receive and impart information and ideas without interference by public authority and regardless of frontiers.

2. The exercise of these freedoms, since it carries with it duties and responsibilities, may be subject to such formalities, conditions, restrictions or penalties as are prescribed by law and are necessary in a democratic society, in the interests of national security, territorial integrity or public safety, for the prevention of disorder or crime, for the protection of health or morals, for the protection of the reputation or rights of others, for preventing the disclosure of information received in confidence, or for maintaining the authority and impartiality of the judiciary.

Russell’s case brings to mind the similar legal battle of Paul Chambers. Frustrated that his travel plans had been disrupted by bad weather, in 2010 he tweeted about blowing up an airport and was subsequently arrested. His conviction attracted public outcry and became a cause célèbre for freedom of speech activists before being overturned on appeal (Chambers v Director of Public Prosecutions [2012] EWHC 2157).

However, there are considerable differences of fact between Russell’s Instagram bio and Chambers’s tweet. Chambers’s tweet mentioned the weather delay — the all-important context — and his “threat” lacked menace, because it did not create fear or apprehension in those who read it. Russell may have been quoting a song lyric, but isolated from any other information, her words could reasonably be (mis)interpreted as a genuine threat.

Unfortunately, only limited information is available on Russell’s case, so it is not possible to fully analyse how the Crown determined that it was indeed in the public interest to pursue prosecution. I would assume, however, that there were aggravating circumstances. Perhaps Russell had a history of offensive behaviour, or maybe the prosecution proved that the lyrics were intended to cause malicious upset to a grieving family?

While the legal principles and their application may be uncertain in situations such as these, this case underscores the need for a cautious approach to social media. At times, even though I recognise intellectually that my Twitter and Instagram feeds are “public,” the fact that I share personal insights and photos makes the platform seem perhaps more intimate and secure than it really is. Social media is like any other “community,” for which certain rules of decorum do apply.

No more Safe Harbours for EU-ser Uploaded Content?

The European Union is considering a sweeping new Directive on Copyright in the Digital Single Market, currently in draft stages. Industry groups are keen to ensure their opinions are taken into consideration, especially in instances where consumers share content which belongs to artists, authors, record labels, and television channels.

Digital platforms and internet service providers which host User Uploaded Content (UUC) argue that they are not responsible for any copyright infringing material uploaded by their users. However, trade bodies representing various industries believe the incoming Copyright in the Digital Single Market Directive doesn’t go far enough to reform this safe harbour principle.

The E-commerce Directive states that EU Member States shall ensure that internet service providers are not liable for copyright infringements carried out by their customers, on condition that: (a) the ISP does not have actual knowledge of illegal activity or information; and (b) the provider “acts expeditiously to remove or to disable access” to the illegal content, once they become aware of it (see Article 14).

This article provides ISPs with a “safe harbour” from copyright liability (Article 14 is the “hosting” defence; a related “mere conduit” protection appears at Article 12). Generally speaking, a safe harbour* is simply a protection available within a regulation which specifies that certain actions do not violate a given rule, in particular circumstances.

In the United States, this principle operates under the “notice-and-take-down system”

About 18 months ago, the European Commission announced its plans to introduce a new Directive on Copyright in the Digital Single Market. As the explanatory memorandum sets out, “the evolution of digital technologies has changed the way works and other protected subject matter are created, produced, distributed and exploited. In the digital environment, cross-border uses have also intensified and new opportunities for consumers to access copyright-protected content have materialised. Even though the objectives and principles laid down by the EU copyright framework remain sound, there is a need to adapt it to these new realities.”

Amongst other things, the proposed Directive seeks to rebalance the position of the copyright owner against that of the internet service provider. Last week, various trade groups representing Europe’s creators and creative content producers published an open Letter to the European Council.

The authors suggest that, far from ensuring legal certainty, the Directive as currently drafted “could be detrimental to our sectors,” which include journalism, film and TV, music, and sport. While the authors support the objectives of the proposed legislation, the Letter critiques the latest draft of the directive, and expresses significant concerns about the safe harbour reforms.

In particular, the problems seem to arise with sections addressing the “use of protected content” by ISPs and other platforms which “store and give access to large amounts of works and other subject-matter uploaded by their users”. Put simply, the copyright industries want the safe harbour reformed, so that it no longer applies to user-upload sites (Complete Music Update).

This calls into question how online platforms hosting UUC should monitor user behaviour and filter their contributions. Currently, platforms review material after it has been published and reported or “flagged” as copyright infringement. Stricter obligations may, as has been discussed in relation to Facebook’s proposed use of artificial intelligence in copyright and hate speech monitoring, “inevitably require an automated system of monitoring that could not distinguish copyright infringement from legal uses such as parody” (The Guardian).

The authors of the Letter voice complaints in respect of the draft forms of Article 2, Article 13(1) and Article 13(4):

  • Article 2 defines which services fall under the liability regime discussed further at Article 13. The latest draft could leave most UUC platforms outside its scope, despite the fact that they continue to provide access to copyright-protected works and other subject matter – for example, music playing in the background of a makeup tutorial on YouTube.
  • The problem with Article 13(1) as currently written is that it risks narrowing the scope of the right and contravening CJEU jurisprudence. The Letter’s authors argue that “any new EU law should secure that this right is broad,” and “contain no additional criteria which could change via future CJEU rulings.”
  • As for Article 13(4) and its relevant recitals, the authors suggest the language is tantamount to a new safe harbour, which would both “seriously undermine fundamental principles of European copyright” and create an “unwarranted liability privilege” that risks breaching the EU’s obligations under international copyright treaties.

The Letter closes with the authors’ promise to “remain at the Council’s disposal to find solutions to these points.” For more on the proposed Directive, be sure to check out the IPKat’s numerous posts on the subject.

*This “safe harbour” in copyright law is not to be confused with the Safe Harbor data privacy exemptions between the US and the EU, which have since been declared invalid. On that subject, I might write on the new Privacy Shield… at some point…

The Six Principles of Data Protection: Facebook fails

Facebook may believe that dubious data collection and security practices are justified by a more connected audience: the incoming General Data Protection Regulation says differently.

Once again, data privacy is in the headlines. But this time, it isn’t a credit agency or department store that has fallen short of consumer expectations: instead, it’s Facebook. Much credit is due to Carole Cadwalladr and her team at The Guardian, who first broke the Cambridge Analytica story.

#DeleteFacebook was trending on Twitter for a while, and I myself was considering ditching my account – not least because I simply don’t use Facebook often. While I’ve decided against deletion, I was genuinely saddened – although, in retrospect, not surprised – to come across the leaked 2016 “Ugly Truth” memo from Facebook executive Andrew “Boz” Bosworth. You can see the memo in full at Buzzfeed, but the part that hit me hardest reads as follows:

We connect people. Period.

That’s why all the work we do in growth is justified. All the questionable contact importing practices. All the subtle language that helps people stay searchable by friends. All of the work we do to bring more communication in. The work we will likely have to do in China some day. All of it.

The natural state of the world is not connected. It is not unified. It is fragmented by borders, languages, and increasingly by different products. The best products don’t win. The ones everyone use win.

“Questionable contact importing practices”? By Bosworth’s own admission, “the ugly truth is that we believe in connecting people so deeply that anything that allows us to connect more people more often is de facto good.”

The General Data Protection Regulation (GDPR) says differently. With less than two months to go until the implementation date of 25 May (!), I’ve set out a little refresher below on the main responsibilities for organisations.

Article 5 of the GDPR contains the Six Principles of personal data collection and processing. The data controller (the company collecting or otherwise controlling the data) is responsible for, and must be able to demonstrate, compliance with these principles.

(A) Processed lawfully, fairly and in a transparent manner.
A company collecting data must make it clear why the data are being collected, and how the data will be used. The company must provide details surrounding the data processing when requested to do so by a person whose data is collected (the “data subject”). “Questionable practices” are likely neither fair nor transparent!

(B) Collected for specified, explicit and legitimate purposes.
Have you ever filled in a form, only to think, “why am I being asked this question?” This principle states that organisations should not collect any piece of personal data that doesn’t have a specific purpose, and each purpose must be made explicit to the data subject. A lawful purpose could mean fulfilling a contract: for example, your address is required for shipping something you bought online.

(C) Adequate, relevant and limited to what is necessary.
Companies strive to understand customer buying behaviours and patterns based on intelligent analytics, but under this principle, only the minimum amount of data required may be stored. Asking for one scanned copy of a driver’s licence may be adequate, but asking for a driver’s licence, passport, and birth certificate would be more than necessary.

(D) Accurate and, where necessary, kept up to date.
Controllers must ensure personal data is accurate, valid and fit for purpose. Accordingly, data subjects have the right under Article 16 (the right to rectification) to rectify any personal data held about themselves.

(E) Kept for no longer than is necessary.
This principle limits how data are stored and moved, and for how long. When data is no longer required, it should be deleted. This is closely related to the right to erasure (the “right to be forgotten”) under Article 17, which I previously wrote about in respect of the Google case in England.

(F) Processed in a manner that ensures appropriate security.
This principle is perhaps what most people think about when they think of data protection. It means that IT systems and paper records must be secure, and the security must be proportionate to the risks and rights of individual data subjects. Negligence is no longer an excuse under GDPR!

In 2016, a Gallup study found that Millennials (those of us born between 1981 and 1996) are generally aware of potential data security risks, but less likely to be concerned about them. Prior to familiarising myself with these principles, I simply thought data protection was another phrase for “IT security”. I thought it was just about firewalls, encryption, and outsmarting hackers.

But in the months I’ve been helping clients to get ready for the GDPR, I’ve realised that compliance is about more than just having strong passwords: it really is a mindset. That’s what’s so disappointing about Facebook’s apparent attitude towards the end consumer, in which people are seen only as a series of clicks or “likes” which can be analysed, predicted, and manipulated – at any cost. My Facebook account may remain active, but I for one will certainly be less engaged.

Photo credit – Book Catalogue

Project Gutenberg: the German edition?

Project Gutenberg is an American website which digitises and archives cultural works to encourage the creation and distribution of eBooks. It currently offers 56,000 free books for download, including classics such as Pride and Prejudice, Heart of Darkness, Frankenstein, A Tale of Two Cities, Moby Dick, and Jane Eyre. Many of these titles are available because their copyright protections have expired in the United States, placing them in the public domain. The website is a volunteer effort which relies mostly on donations from the public.

What does it mean if a book is in “the public domain”? This term means that something (a novel, artwork, photograph or other creation) is not protected by intellectual property law, including copyright, trade mark, or patent. Accordingly, the general public owns the work, and not the individual creator. Permission is therefore not required to use the creation.

Despite the noble cause of making literature available at no or low cost to the masses, a recent ruling against Project Gutenberg has resulted in the website being geo-blocked for all visitors attempting to access the site from Germany. The claimants in the case are the copyright owners of 18 German language books, written by three authors, each of whom died in the 1950s.

In Germany, the term of copyright protection for literary works is “life plus 70 years,” as it is in the United States. However, the United States applies different rules to works published before 1978, for which the maximum copyright duration is 95 years from the date of publication. In the United States, the 18 books in question are therefore all in the public domain. For the avoidance of doubt, Project Gutenberg runs on servers at the University of North Carolina at Chapel Hill, and is classified as a non-profit charity organisation under American law.
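To see how the same book can be free in one country and protected in another, here is a small worked sketch in Python applying the simplified rules above – my own illustration, with a hypothetical author and dates; real copyright terms turn on further details such as renewal formalities and year-end rounding.

```python
def german_expiry(author_death_year: int) -> int:
    # Germany: life of the author plus 70 years (simplified).
    return author_death_year + 70

def us_pre1978_expiry(publication_year: int) -> int:
    # US, works published before 1978: at most 95 years from publication.
    return publication_year + 95

# Hypothetical: an author who died in 1955, book published in 1920.
print(german_expiry(1955))      # 2025 -> still protected in Germany
print(us_pre1978_expiry(1920))  # 2015 -> already public domain in the US
```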

Sharing and accessing the written word has changed since the 16th century! Engraving showing a publisher’s printing process, from the Met Museum.

The copyright holders of these works notified Project Gutenberg of the alleged infringement back in 2015. In early February 2018, the District Court of Frankfurt am Main approved the claimants’ “cease and desist” request to remove and block access to the 18 works in question. The claimants also requested administrative fines, damages, and information in respect of how many times each work was accessed from the website.

Project Gutenberg’s website carries the following notice:

Our eBooks may be freely used in the United States because most are not protected by U.S. copyright law, usually because their copyrights have expired. They may not be free of copyright in other countries. Readers outside of the United States must check the copyright terms of their countries before downloading or redistributing our eBooks.

The Court reasoned that it was worth taking into account the fact that the works in dispute are in the public domain in the United States. This, however, “does not justify the public access provided in Germany, without regard for the fact that the works are still protected by copyright in Germany.” The simple message on the front page (cited above) may not be sufficient to draw users’ attention to the fact that what they are downloading may be in contravention of national copyright laws.

The judgment also cited Project Gutenberg’s own T&Cs, noting that the website considers its mission to be “making copies of literary works available to everyone, everywhere.” While this broad statement may seem innocuous and idealistic, the court used it to support its finding that Project Gutenberg could not reasonably characterise itself as an America-only website.

A key point in this matter is the question of jurisdiction. While Project Gutenberg is based in the USA, the claimants successfully argued that because the works were in German and parts of the website itself had been translated into German, the website was indeed “targeted at Germans.” Furthermore, even if the website had not been intended for German audiences, the fact that the infringement occurred in Germany is sufficient grounds to bring the claim in a German court.

While Project Gutenberg was only required to remove the 18 works listed in the lawsuit, the organisation has blocked its entire website in Germany to protect itself from any further potential lawsuits on similar grounds (see the Q&A here). Project Gutenberg is planning to appeal the decision.

This piece was first published on the 1709 Copyright Blog. You can also read more at the IPKat here.


From Stockholm to Stock Market: Sweden’s Spotify set to list on NYSE

Music streaming giant Spotify recently filed its application to list shares on the New York Stock Exchange. The 264-page document details the company’s key risks and challenges: I’ve read them so you don’t have to!

The Securities Act of 1933, often called the Truth in Securities law, requires that investors receive financial and other significant information concerning securities offered for public sale. To avoid misrepresentations and other fraud, any company wishing to place its shares on an American market must submit a prospectus, formally known as an SEC Form S-1 (or an F-1 for foreign companies).

Sweden-based Spotify filed its prospectus for the New York Stock Exchange on 28 February. Prospectuses are heavily regulated, and accuracy is vital: it is a lawyer’s job to fact-check these documents in a process known as “verification.” To allow investors to make informed decisions, a company must be honest about its particular commercial situation, and explain how share prices may decline. Spotify’s estimated valuation is nearly $20 billion, but it has never made a profit and reports net losses of €1.2bn (£1.1bn).

Spotify clearly needs a capital injection,
but given the risks below, would you invest?

Hitting the right note with listeners.
Spotify’s unique features include advanced data analytics systems and proprietary algorithms which predict music that users will enjoy. These personalised streams rely on Spotify’s ability to gather and effectively analyse large amounts of data, together with acquiring and categorising new songs that appeal to “diverse and changing tastes.” If Spotify fails to accurately recommend and play music that customers want, the company may fail to retain or attract listeners.

Spotify knows that 71% of my recent tunes are energetic, upbeat, and suitable for a fitness enthusiast. Touché!
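Spotify’s actual models are proprietary, but the basic idea of predicting what a listener will enjoy can be illustrated with a toy item-based collaborative filter: recommend tracks whose listening patterns most resemble the tracks a user already plays. A minimal sketch with entirely made-up data:

```python
import numpy as np

# Toy play-count matrix: rows = users, columns = tracks (hypothetical data).
plays = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 1, 5, 4],
], dtype=float)

# Cosine similarity between tracks, based on who listens to them.
norms = np.linalg.norm(plays, axis=0, keepdims=True)
norms[norms == 0] = 1.0  # avoid dividing by zero for unplayed tracks
unit = plays / norms
track_sim = unit.T @ unit

def recommend(user: int, top_n: int = 2):
    """Score unheard tracks by similarity to the user's listening history."""
    scores = track_sim @ plays[user]
    scores[plays[user] > 0] = -np.inf  # don't re-recommend what they already play
    return list(np.argsort(scores)[::-1][:top_n])

print(recommend(user=0))  # indices of suggested tracks for user 0
```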

Licensing and royalties.
To make its 35 million tracks available for listeners, Spotify requires licences from the musicians and record labels who own the songs. Additionally, Spotify has a complex royalty payment scheme, and it is difficult to estimate the amounts payable to musicians under their licence agreements. Even if Spotify secures the necessary rights to sound recordings from record labels and other copyright owners, artists may wish to discontinue licensing rights, hold back content, or increase their royalty fees. In 2014, Taylor Swift removed her songs from the streaming service in protest, although she later added them back.

Technical glitches and data protection.
Spotify’s software and networks are highly technical and may contain undetected bugs or other vulnerabilities, which could seriously harm its systems, data, and reputation. Growing concerns regarding privacy and protection of data, together with any failure (or appearance of failure) to comply with data protection laws, could diminish the value of Spotify’s service. This is especially worth noting as Europe nears the General Data Protection Regulation (GDPR) implementation date of 25 May.

Spotify’s NYC offices

Innovation and skilled employees.
Rapid innovation and long-term user engagement are prioritised over short-term financial gain. Spotify admits “this strategy may yield results that sometimes do not align with the market’s expectations.” The company also depends on highly skilled personnel to operate the business; if it is unable to attract, retain, and motivate qualified employees, its ability to develop and successfully grow the company could be harmed.

International regulation and taxation.
As Spotify expands into new territories, it must adhere to a variety of different laws, including those in respect of internet regulation and net neutrality. Spotify even admits that language barriers, cultural differences, and political instability can bring share prices down! Furthermore, public pressure continues to encourage governments to adopt tougher corporate tax regimes, and tax audits or investigations could have a material adverse effect on the company’s finances.

Method of offering.
While Spotify may not be able to successfully overcome each challenge listed in its prospectus, many of the risks are relatively common amongst international technology and media companies. But as an additional risk, Spotify has chosen a relatively unconventional method known as a direct public offering (DPO) to bring its shares to the stock market. Unlike a traditional IPO, in a DPO a company will not use an investment bank to market or underwrite (insure) its offering. While this avoids bank fees, uncertainty can result in a discounting of share prices. This is a really technical point and somewhat nuanced (it gave me headaches in law school!) but a risk worth noting.

I’ve written previously about Spotify’s copyright challenges, as well as its controversial privacy policy.