Have European laws improved American privacy protections?

The European Union’s landmark data privacy law, the General Data Protection Regulation (GDPR), went into effect one year ago this week. By now, the implications for European residents and companies are fairly well known. Many of us will have received updated privacy policies in our email inboxes, or become increasingly aware of headline-grabbing stories on mass data breaches. But what about beyond the borders of Europe? Has the GDPR changed the way in which data protection and privacy matters are viewed in the United States?

The first thing to consider is whether the GDPR has the power to influence how American companies handle data. The answer is yes. The GDPR is a single legal framework that applies across all 28 EU member states – including, for the time being, the United Kingdom. But in a considerable departure from the old Data Protection Directive (95/46/EC), the GDPR imposes an expanded territorial scope beyond the EU itself. No matter where they are located around the world, companies must comply with the GDPR if they either offer goods or services to European residents, or monitor their behavior (see Article 3(2) and, inter alia, Recitals 23 and 24).

These new regulations are not without teeth. Whereas fines under the previous regime were comparatively modest (in the UK, for example, capped at £500,000 under the Data Protection Act 1998), fines under the GDPR can reach up to 20 million euros or 4% of a breaching company’s global annual turnover, whichever is higher. Accordingly, from 25 May 2018, many American companies became subject to European privacy laws for the first time, and faced considerably enhanced sanctions for noncompliance.
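
To put Article 83(5)’s two-limb ceiling in concrete terms, here is a minimal sketch of the arithmetic, using a hypothetical turnover figure:

```python
def max_gdpr_fine_eur(global_annual_turnover_eur: float) -> float:
    """Theoretical ceiling under Article 83(5) GDPR: the *higher* of
    EUR 20 million or 4% of worldwide annual turnover."""
    return max(20_000_000.0, 0.04 * global_annual_turnover_eur)

# A hypothetical company with EUR 2 billion in global turnover:
# 4% of EUR 2bn = EUR 80m, well above the EUR 20m floor.
print(f"{max_gdpr_fine_eur(2_000_000_000):,.0f}")  # 80,000,000
```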

As a result, in the lead-up to GDPR taking effect, many Europeans were geo-blocked from accessing American websites. The reason? If European customers were blocked from accessing the websites, the companies would not technically be “offering their goods or services” to Europeans, nor would they be “monitoring their behavior”.
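
Technically, such geo-blocking is straightforward. The sketch below is illustrative only, and not any particular company’s implementation: the visitor’s country is resolved from their IP address (real sites use a commercial GeoIP database for this), and EU visitors are refused service.

```python
# EU member states by ISO country code (including the UK at the time).
EU_COUNTRIES = {
    "AT", "BE", "BG", "HR", "CY", "CZ", "DK", "EE", "FI", "FR",
    "DE", "GR", "HU", "IE", "IT", "LV", "LT", "LU", "MT", "NL",
    "PL", "PT", "RO", "SK", "SI", "ES", "SE", "GB",
}

def handle_request(visitor_country_code: str) -> tuple[int, str]:
    """Return an HTTP status code and body for a visitor."""
    if visitor_country_code in EU_COUNTRIES:
        # 451 "Unavailable For Legal Reasons" is the status code some
        # US news sites served to European readers after 25 May 2018.
        return 451, "Unavailable in your region for legal reasons."
    return 200, "Regular site content."

print(handle_request("FR"))  # (451, 'Unavailable in your region ...')
print(handle_request("US"))  # (200, 'Regular site content.')
```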

Although the majority of companies retreating from Europe were small to medium-sized technology companies, others included global names such as the Los Angeles Times (US small businesses drop EU customers over new data rule, Financial Times).


The other approach taken by US companies was to move data centres and servers from Europe to the United States. Facebook made headlines by shifting data concerning more than 1.5 billion users from Ireland to its main offices in California. Although Facebook told Reuters that it applies “the same privacy protections everywhere, regardless of whether your agreement is with Facebook Inc [California] or Facebook Ireland,” representatives from the social media giant noted that “EU law requires specific language” in mandated privacy notices, whereas American law does not.

Has the GDPR made Europe “too chilled” for American tech companies? It is important to note that users impacted by Facebook’s server relocation mentioned above were non-EU users. Furthermore, the data migration does not release Facebook from its obligation to comply with the GDPR, insofar as European users are concerned. Nevertheless, the relocation underscores the point that the United States is often seen as a more friendly home for companies seeking fewer, less stringent privacy regulations.

Several companies which initially fled the long arm of the GDPR have returned to Europe, albeit with significantly changed privacy notices and data protection practices. However, many have stayed away. Some privacy advocates will hail the departure of American tech companies unwilling to comply with the new privacy rules. But while it is true that privacy protection is an important and fundamental human right, it cannot be ignored that a growing body of evidence suggests the GDPR has had a chilling effect on a wide variety of overseas companies.

According to a recent study by the Illinois Institute of Technology and the National Bureau of Economic Research, there has been an 18% decrease in the number of EU venture deals and a 40% decrease in the dollar amount per deal following GDPR implementation (The Short-Run Effects of GDPR on Technology Venture Investment).

Together with increased European regulations of the digital economy on the whole, it is arguable that lawmakers in Brussels are making it more difficult for American companies to enter the European market. Even for those that decided to remain in the EU despite the enhanced regulations, their future remains uncertain.

Will the GDPR inspire privacy laws in the United States? Given that US companies – even those with no physical presence in Europe – must now play by European privacy rules in order to reach the EU market, it is arguable that various technology and media entities will start to impose tougher privacy standards on themselves. Such self-regulation is likely to be welcomed by technology professionals and corporate insiders, who may consider themselves better positioned than regulators and lawmakers to tackle the problems of privacy in a digital age. However, as we have seen in sectors ranging from pharmaceuticals to finance, self-regulation often falls short when it comes to consumer protection.

 

In April 2018, Facebook founder Mark Zuckerberg was called before the US Senate to answer questions over Facebook’s responsibility to safeguard user privacy and the Cambridge Analytica scandal.

For a variety of reasons which fall beyond the scope of this post, the privacy laws of the United States have developed in an ad hoc fashion. Apart from the Children’s Online Privacy Protection Act (COPPA) and the Health Insurance Portability and Accountability Act (HIPAA), few national laws exist to protect data privacy.

Instead, in the United States, companies are caught under different laws depending on which state they are headquartered in, or where they do business. The federal laws which touch on data privacy most often regulate specific industry sectors, such as the health insurance sector mentioned above. Even in the wake of the Equifax data breach of summer 2017 – which affected over 145 million US consumers – attempts to improve consumer privacy protections have failed to pass in Congress.

Despite the lack of federal legislation, some American states are using their powers to pass laws at a more local level. One such state is California, which happens to boast both the world’s fifth largest economy, as well as one of the most impressive technology industries. Last year, California Governor Jerry Brown signed the California Consumer Privacy Act (CCPA) into law.

While at only 12 pages the law is a far cry from the obviously more comprehensive GDPR, it does grant California consumers specific rights over their personal information held by companies. Perhaps most interestingly, because the CCPA applies to any company which does business with California residents, the law will likely have a major impact on the privacy landscape across the country.

This raises the question: if the United States is in need of enhanced privacy protections, who should spearhead the endeavour? The US federal government via Congress, state legislators, or companies themselves? Some believe consumers will be better protected if Congress resists the temptation to intrude at the federal level, allowing the states to experiment with their own legislation.

As we have seen in Europe, it is abundantly clear that any single privacy framework must be both flexible and scalable across a variety of industry sectors, geographies, and company types. To add to the political complexity, powerful industry players will likely lobby for special exceptions, and various federal agencies may clash over who will enforce any such regulation(s).

In conclusion, it is safe to say that the GDPR has indeed changed the way in which data protection and privacy matters are viewed outside of Europe. But the direction America will choose to take remains unclear.

On the one hand, some American companies have retreated from the EU. On the other, local governments have begun to take consumer privacy more seriously by introducing new domestic data protection legislation. Finding a balance between the two forces of economic enterprise and regulatory power may be difficult. More likely, there will be a push-and-pull effect; whether privacy will prevail is yet to be seen.

DeepFakes and False Lights: what does the law say?

What do Scarlett Johansson, cyber intelligence experts and some lawmakers have in common? Their shared concern about AI-generated videos. Known as “DeepFakes,” these videos can have a damaging impact on reputations, emotional health, and even national security. But what is the legal status of this disruptive – and oftentimes disturbing – technology?

Deepfake – which combines “deep learning” and “fake” – is commonly defined as an artificial intelligence-based human image synthesis technique. Put simply, it’s a way to superimpose one face over another.

In December 2017, an anonymous Reddit user started a viral phenomenon by using machine-learning software to swap porn performers’ faces with those of famous actresses. Scarlett Johansson, one of the most highly-paid actresses in Hollywood, has herself been the victim of such “creations”. Speaking to the Washington Post, she explained that “nothing can stop someone from cutting and pasting my image or anyone else’s onto a different body and making it look as eerily realistic as desired. There are basically no rules on the internet because it is an abyss that remains virtually lawless.”

It goes without saying that such fake porn videos can easily damage careers, emotional well-being, and a person’s sense of dignity and self-esteem. But there are other implications, too.

As a general starting point, it’s useful to understand what AI is – and isn’t. “Artificial Intelligence” is not another word for the robot overlords in Blade Runner or even Skynet’s Terminators. Rather, AI is fundamentally a machine-learning application whereby a computer learns to fulfill a certain task on its own. What makes AI special is that machines are essentially “taught” to complete tasks that were previously done by humans, by doing the task over and over again.
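
A toy example makes the “over and over again” point concrete. The sketch below is not a deepfake model – those involve vastly larger networks – but it shows the same principle of repetition and correction, here “learning” the rule y = 2x from a handful of examples:

```python
# Toy machine learning: fit y = w * x by repeatedly attempting the
# task, measuring the error, and nudging the parameter w.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (input, desired output)
w = 0.0  # the single "knob" the machine adjusts

for step in range(1000):          # do the task over and over again
    for x, y_true in data:
        y_pred = w * x            # attempt the task
        error = y_pred - y_true   # measure how wrong the attempt was
        w -= 0.01 * error * x     # adjust to do better next time

print(round(w, 2))  # converges near 2.0 -- the machine "learned" the rule
```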

With deepfakes, it doesn’t take long for the AI to learn the skill with eerie precision, and produce sophisticated (albeit artificial) images. The technology has many legitimate uses, especially in the film industry, where an actor’s face can be placed on their stunt double’s body. But thanks to continued advancement in the technology itself, the political and legal risks are higher than ever before.

On 29 January, US Director of National Intelligence Dan Coats spoke before the Senate Select Committee on Intelligence to deliver the Worldwide Threat Assessment, which had been compiled by the US intelligence community. The document sets out the biggest global threats in the following order: cyber, online influence operations (including election interference), weapons of mass destruction, terrorism, counterintelligence, and emerging and disruptive technologies.

Yes, cyber attacks and online influence operations are discussed before traditional weapons of mass destruction. The report even mentions deepfakes explicitly:

Adversaries and strategic competitors probably will attempt to use deep fakes or similar machine-learning technologies to create convincing—but false—image, audio, and video files to augment influence campaigns directed against the United States and our allies and partners.

Senator Mark Warner, the top Democrat on the Senate Intelligence Committee, explained that “we already struggle to track and combat interference efforts and other malign activities on social media — and the explosion of deep fake videos is going to make that even harder.” This is particularly relevant given the severe political polarization around the world today: from Brexit to Trump and everywhere in between, deepfakes could become powerful ways to spread more disinformation and distrust.

There are some legal remedies which may combat the more nefarious uses of deepfakes. As explained by the International Association of Privacy Professionals (IAPP), in common law jurisdictions like the United States and the United Kingdom, the victim of a deepfake creation may be able to sue the deepfake’s creator under one of the privacy torts. By way of example, the false light tort requires a claimant to prove that the deepfake in question falsely represents the claimant, in a way that would be embarrassing or offensive to the average person.

Another potentially relevant privacy tort is that of misappropriation, or the right of publicity, if the deepfake is used for commercial purposes. Consider, for example, if someone made a deepfake commercial of Meghan, the Duchess of Sussex endorsing a certain makeup brand. Since individuals generally do not own the copyright interest in their own image (i.e., the photograph or video used to make a deepfake), copyright law is not a good remedy to rely upon. Instead, Meghan could argue that the deepfake misappropriated her personality and reputation for someone else’s unauthorised commercial advantage. However, it’s important to note that personality rights are frustratingly nebulous here in the United Kingdom, as I explained in Fame and Fortune: how celebrities can protect their image.

Depending on the nature of the deepfake, a victim may also be able to sue for the intentional infliction of emotional distress, cyberbullying, or even sexual harassment. But in many instances, the burden of proof to establish these claims can be a notoriously difficult standard to meet.

Furthermore, the practical challenges of suing the creator of a deepfake are considerable. Firstly, such creators are often anonymous or located in another jurisdiction, which makes legal enforcement very difficult. Although a victim could request that the creator’s internet service provider (ISP) remove the deepfake, establishing what is known as “online intermediary liability” and forcing an ISP to get involved can be an uphill battle in and of itself (this was the topic of one of my papers in law school). As for the victim exercising their right to be forgotten under the EU’s General Data Protection Regulation (Article 17, GDPR), the same problem arises: who is responsible for taking down the deepfake?

Secondly, the creator may lodge a defense of free speech or creative expression, especially if the deepfake victim is a political figure or otherwise in the public spotlight. This raises the question: to what extent is a deepfake depicting a member of parliament any different from a satirical cartoon or parody? Unless the deepfake is outrageously obscene or incites actual criminal behaviour, it may be nearly impossible to take legal action.

Deepfakes are but one of many instances where the law has not quite kept up with the rapid development of new technology. Although issues like these keep technology lawyers like myself employed, the potential for genuine harm caused by deepfakes in the wrong hands cannot be overstated. Outlawing or attempting to ban deepfakes outright is neither possible nor desirable, but increased regulation may be a viable option. Deepfakes could be watermarked or labelled before being shared by licensed or regulated entities (for example, news organisations), much in the same way that airbrushed models in advertisements are labelled in France. Doing so may at least slow down the proliferation of deepfakes purporting to be genuine.
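
As a proof of concept, visible labelling of this kind is trivial to implement. The sketch below assumes the Pillow imaging library and hypothetical file names; a production system would also need tamper-resistant, machine-readable provenance metadata rather than a simple overlay:

```python
from PIL import Image, ImageDraw  # Pillow: pip install Pillow

def label_frame(path_in: str, path_out: str) -> None:
    """Stamp a visible synthetic-media disclosure onto one video frame."""
    frame = Image.open(path_in).convert("RGB")
    draw = ImageDraw.Draw(frame)
    # Draw the disclosure in the top-left corner of the frame.
    draw.text((10, 10), "LABEL: SYNTHETIC MEDIA", fill="red")
    frame.save(path_out)

# Hypothetical file names, for illustration only.
label_frame("deepfake_frame.png", "deepfake_frame_labelled.png")
```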

But until then, the only advice remains that you shouldn’t believe everything you read – or see, or hear – online.

 

Facebook and Privacy: cases, reports and actions in Europe

A list of European enforcement action, official legislative (Parliamentary) reports, and cases concerning Facebook with respect to data protection and privacy. This is a work in progress, last updated November 2018.

Data Protection Commissioner (Ireland) v Facebook Ireland Limited, Maximillian Schrems [Case C-311/18]

  • Jurisdiction: European Union, Ireland
  • Status: Case still in progress
  • Authority: Court of Justice of the European Union
  • Keywords: EU Data Protection Directive (95/46/EC); EU/US Privacy Shield; Fundamental Rights

Continue reading “Facebook and Privacy: cases, reports and actions in Europe”

Transatlantic Data Transfers: US-EU Privacy Shield under review

When personal data travels between Europe and America, it must cross international borders lawfully. If certain conditions are met, companies can rely on the US-EU Privacy Shield, which functions as a sort of “tourist visa” for data. 

Earlier this week (19 November) the United States Federal Trade Commission finalised settlements with four companies that the agency accused of falsely claiming to be certified under the US-EU Privacy Shield framework. This news closely follows the highly anticipated second annual joint review of the controversial data transfer mechanism. 

IDmission LLC, mResource LLC, SmartStart Employment Screening Inc., and VenPath Inc. were slapped on the wrist by the FTC over allegations that they misrepresented their certification. But this is just the latest chapter in an ongoing debate regarding the Privacy Shield’s fitness for purpose. Only this summer, the European Parliament urged the European Commission to suspend the Privacy Shield programme over security and privacy concerns.


Background and purpose

Designed by the United States Department of Commerce and the European Commission, the Privacy Shield is one of several mechanisms by which personal data can be sent and shared between entities in the EU and the United States. The Privacy Shield framework thereby protects the fundamental digital rights of individuals who are in the European Union, whilst encouraging transatlantic commerce.

This is particularly important given that the United States has no single, comprehensive law regulating the collection, use and security of personal data. Rather, the US uses a patchwork system of federal and state laws, together with industry best practice. At present, the United States as a collective jurisdiction fails to meet the data protection requirements established by EU lawmakers.

As such, should a corporate entity or organisation wish to receive European personal data, it must bring itself in line with EU regulatory standards by becoming certified under the Privacy Shield. To qualify, companies must self-certify annually that they meet the requirements set out by EU law. This includes taking measures such as displaying a privacy policy on their website, replying promptly to any complaints, providing transparency about how personal data is used, and ensuring stronger protection of personal data.

Today, more than 3,000 American organisations are authorised to receive European data, including Facebook, Google, Microsoft, Twitter, Amazon, Boeing, and Starbucks. A full list of Privacy Shield participants can be found on the privacyshield.gov website.
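
Conceptually, the check a data exporter must make before transferring is simple, even if the legal substance behind it is not. A minimal sketch, with hypothetical participant names standing in for the official list at privacyshield.gov:

```python
# Hypothetical, illustrative subset of certified participants.
ACTIVE_PARTICIPANTS = {"Example Corp", "Sample Analytics Inc"}

def may_transfer_under_privacy_shield(recipient: str) -> bool:
    """Transfer only if the recipient holds an active certification."""
    return recipient in ACTIVE_PARTICIPANTS

for company in ("Example Corp", "Uncertified LLC"):
    if may_transfer_under_privacy_shield(company):
        print(f"{company}: transfer may proceed under the Privacy Shield.")
    else:
        print(f"{company}: use another lawful mechanism (e.g. Model Clauses).")
```

The FTC settlements above illustrate the weak point of this model: the list is only as reliable as the self-certifications behind it.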

Complaints and non-compliance?

There is no non-compliance. We are fully compliant. As we’ve told the Europeans, we really don’t want to discuss this any further.

—Gordon Sondland, American ambassador to the EU

Although the Privacy Shield imposes stronger obligations than its predecessor, the now-defunct “Safe Harbor,” European lawmakers have argued that “the arrangement does not provide the adequate level of protection required by Union data protection law and the EU Charter as interpreted by the European Court of Justice.”

In its motion to reconsider the adequacy of the Privacy Shield, the EU Parliament stated that “unless the US is fully compliant by 1 September 2018” the EU Commission would be called upon to “suspend the Privacy Shield until the US authorities comply with its terms.” The American ambassador to the EU, Gordon Sondland, responded to the criticisms, explaining: “There is no non-compliance. We are fully compliant. As we’ve told the Europeans, we really don’t want to discuss this any further.”

Věra Jourová, a Czech politician and lawyer who serves as the European Commissioner for Justice, Consumers and Gender Equality, expressed a different view: “We have a list of things which needs to be done on the American side” regarding the upcoming review of the international data transfer deal. “And when we see them done, we can say we can continue.”

Jourová and Sondland in the Berlaymont, via a tweet from Sondland saying he was “looking forward to our close cooperation on privacy and consumer rights issues that are important to citizens on both sides of the Atlantic.”

The list from the Parliament and the First Annual Joint Review [WP29/255] (.pdf) concerns institutional, commercial, and national security aspects of data privacy, including:

  • American surveillance powers and the use of personal data for national security purposes and mass surveillance. In particular, the EU is unhappy with America’s re-authorisation of section 702 of the Foreign Intelligence Surveillance Act (FISA), which authorises government collection of foreign intelligence from non-Americans located outside the United States (remember Edward Snowden and PRISM? See the Electronic Frontier Foundation’s explanation here)
  • Lack of auditing or other forms of effective regulatory oversight to ensure whether certified companies actually comply with the Privacy Shield provisions
  • Lack of guidance and information made available for companies
  • Facebook and the Cambridge Analytica scandal, given that 2.7 million EU citizens were among those whose data was improperly used. The EU Parliament stated it is “seriously concerned about the change in the terms of service” for Facebook
  • Persisting weaknesses regarding the respect of fundamental rights of European data subjects, including lack of effective remedies in US law for EU citizens whose personal data is transferred to the United States
  • The Clarifying Lawful Overseas Use of Data (“CLOUD”) Act, signed into law in March 2018, which allows US law enforcement authorities to compel production of communications data, even if stored outside the United States
  • Uncertain outcomes regarding pending litigation currently before European courts, including Schrems II and La Quadrature du Net and Others v Commission.

 

Max Schrems is an Austrian lawyer and privacy activist. In 2011 (at the age of 25) while studying abroad at Santa Clara University in Silicon Valley, Schrems decided to write his term paper on Facebook’s lack of awareness of European privacy law. His activism led to the replacement of the Safe Harbor system by the Privacy Shield.

What happens if the Privacy Shield is suspended?

In a joint press release last month, the representatives from the EU and USA together reaffirmed “the need for strong privacy enforcement to protect our citizens and ensure trust in the digital economy.” But that may be easier said than done.

In the event that the Privacy Shield is suspended, entities transferring European personal data to the United States will need to consider implementing alternative compliant transfer mechanisms, which could include the use of Binding Corporate Rules, Model Clauses, or establishing European subsidiaries. To ensure that the American data importer implements an efficient and compliant arrangement, such alternatives would need to be assessed on a case-by-case basis, involving careful review of the data flows and the controllers and processors involved.

Regardless of the method used to transfer data, American companies must ensure that they receive, store, or otherwise use European personal data only where lawfully permitted to do so. The joint statement noted above concluded by saying that the “U.S. and EU officials will continue to work closely together to ensure the framework functions as intended, including on commercial and national-security related matters.”

The European Commission is currently analysing information gathered from its American counterparts, and will publish its conclusions in a report before the end of the year.

Chinese IPRs and Trade Wars

著作權 or Zhùzuòquán means “copyright” in Mandarin Chinese. Earlier this week, Chinese authorities kicked off a campaign against online copyright infringement. Is this crackdown a response to increased pressure from foreign investors —and the Trump administration— for China to combat widespread piracy and counterfeiting?

The latest Jianwang Campaign Against Online Copyright Infringement was jointly launched by several government agencies, including the National Copyright Administration of China, the Cyberspace Administration, and the Ministry of Public Security. It will target key areas for intellectual property rights (IPRs), including unauthorised republication of news and plagiarism on social media, broadcasting copyrighted content on video-sharing apps, and setting up overseas servers to get around territorial restrictions. The campaign, which will last for at least four months, will also push internet service providers to enhance internal supervision systems.

Similar to the crackdown last September, the campaign is seen by many as an attempt to alleviate major concerns among foreign investors, including those in the United States. China’s weak IPR protection measures “frequently draw complaints from foreign investors and have been a long-standing focus of attention at annual talks with the US and Europe.”

The issue hit headlines again last autumn, when the Office of the United States Trade Representative led an official seven-month investigation into China’s intellectual property theft, under section 301 of the Trade Act of 1974. Bolstered by the USTR’s findings that “Chinese theft of American IP currently costs between $225 billion and $600 billion annually”, the Trump Administration imposed retaliatory tariffs on Chinese products in early July.

Pedestrians strolling past adverts for western companies in Shanghai. Photo: Tomohiro Ohsumi/Bloomberg

Considering 200 years of history: is “Chinese culture” to blame for copyright infringement?

According to the 2017 Situation Report on Counterfeiting and Piracy in the European Union, China has long been recognised as the engine of the global counterfeiting and piracy industry. Whereas software piracy rates for the European Union are 28 per cent, analysts at BSA | The Software Alliance believe nearly 70 per cent of computers in China run unlicensed software.

In 2012, an article in Forbes explained that “IP protection will always be an uphill struggle in China and for companies doing business there,” as individual rights – including IPRs – may be at odds with traditional Chinese society. What support does that argument have?

Firstly, it’s important to note that IP is not an indigenous concept in China. Historically speaking, the lack of a strong IP regime can be traced to the early roots of China’s economic system, which emphasised agriculture and generally neglected large-scale commerce. Before the Opium War (1839-1842), foreign powers were unconcerned with the lack of IP protection in China primarily because there was little foreign investment there to protect in the first instance. Furthermore, the main European exports to China at the time were unbranded bulk commodities, and not technological innovations or creative works such as software, film, and music.

During the Chinese Revolution, Mao Zedong’s Communist Party abolished all legal systems in 1949. Throughout the Cultural Revolution of the 1960s and 1970s, China lacked any semblance of a functioning legal system. As per Communist political ideology, “Law” in China during this time was guided by general principles and shifting policies, rather than detailed and constant rules.

When Deng Xiaoping adopted an open-door economic policy in the late 1970s, China’s trading partners were no longer restricted to the USSR and Soviet satellites, but now included Western countries. Several years later, the Communist Party officially pronounced that the Cultural Revolution had been a grave error, and began to shift its economic and social reforms. To support its burgeoning and rapid economic development, China accordingly began to embrace a formal IPR strategy. When China joined the World Trade Organisation in 2001, it became bound by the Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS).

Enhancing the protection of intellectual property rights is a matter of overall strategic significance, and it is vital for the development of the socialist market economy.

—Li Keqiang, Premier of the People’s Republic of China

The Wall Street Journal further explains that, incentivised by the influx of foreign technology and media companies wishing to invest in China, IPR protection in the country has been rising steadily for the last decade. In 2006, there were approximately 6,000 copyright lawsuits; by 2016, that number had multiplied nearly 15 times over, to 87,000 cases.

If Chinese IP law is increasingly comparable to European and American standards, why then does China continue to attract disapproval?  

Although the rate of unlicensed or “pirated” software in China is nearly 70 per cent, the piracy rates in Indonesia, Pakistan, Vietnam, Albania, Belarus, Ukraine, Bolivia, Algeria, Botswana, Zimbabwe and many other countries are much higher. However, because China’s economy is a behemoth, and uses an incredible amount of software, the value of such pirated software is over $6.5 billion.

Secondly, although it is true that Chinese IPR enforcement is catching up to US and European standards, considerable weakness remains in the form of high levels of bureaucracy. For example, court decisions might apply on a provincial level rather than nationally, and judges often have different interpretations of the laws.

A farm in Altay Prefecture, China. 42 per cent of people in China live in rural communities. Photo: @linsyorozuya

Of China’s 1.4 billion residents, nearly 600 million live in rural communities. While central authorities may establish the laws and regulations, it is the local authorities who are tasked with implementing them. It is therefore important to note that local protectionism probably constitutes the largest obstacle to cracking down on piracy in China.

Finally, from a sociological perspective, it could be argued that English-language media promotes an inaccurate portrayal of IP piracy as somehow rooted in Chinese culture and Otherness. To be fair, European and American copyright law is also plagued with intense debate and woeful inadequacies surrounding the evolution of online technologies.

IP is a complex area of law, and for a variety of reasons copyright is perhaps one of the most difficult areas to legislate. China still has a long way to go in respect of its IPR regime, a sentiment acknowledged by Beijing. However, the danger of perpetuating snippets and sound bites without adequate context is non-trivial. IPR policy affects United States foreign policy, and an incorrect understanding of the problem can lead to disruptions in international relations, or even trade wars.

 

featured image photo of Shanghai: @Usukhbayar Gankhuyag

Social network, media company, host provider, neutral intermediary… what’s in a name for YouTube?

Media companies who call themselves social networks will have to recognize that they, too, have to take on responsibility for the content with which they earn their millions.

—Markus Breitenecker, CEO of Puls4

Who is to blame if someone records TV programmes and illegally uploads them to YouTube: YouTube, or the individual? According to the Commercial Court of Vienna, YouTube is jointly responsible for copyright breaches from user-uploaded content. Is this eine Entscheidung, die das Internet revolutionieren könnte – a decision that could revolutionize the Internet?

To date, the unanimous opinion of European case law supports the position that YouTube is only a platform, an intermediary, a service provider, a neutral host, and so on – and therefore could not bear the responsibility for stolen content. That’s no longer true, says the Handelsgericht Wien (Vienna’s Commercial Court).

In its judgement of 6 June, the Court handed Austrian TV broadcaster Puls4 a key victory in its four-year legal battle with Google-owned YouTube. In 2014, Puls4 had sued YouTube for allowing Puls4’s stolen content to appear on the YouTube platform. YouTube responded by asserting the Host Provider Privilege set out in Article 14 of the E-Commerce Directive 2000/31/EC, which in certain situations shields host providers from being held responsible for the actions of its users.

The Americans have a similar provision in the Online Copyright Infringement Liability Limitation Act (OCILLA), which forms part of the Digital Millennium Copyright Act. The OCILLA creates a conditional “safe harbor” for online service providers by shielding them from liability for their own acts of direct copyright infringement, as well as from potential secondary liability for the infringing acts of others. In exempting internet actors from copyright infringement liability in certain scenarios, both Article 14 and the Safe Harbor rule aim to balance the competing interests of copyright holders and those who use the content online.
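
The conditional nature of both shields can be sketched schematically. The labels below are shorthand for the statutory tests, not a statement of the law:

```python
def host_is_shielded(plays_active_role: bool,
                     knows_of_infringement: bool,
                     removed_expeditiously: bool) -> bool:
    """Schematic safe-harbour logic (Article 14 / OCILLA, simplified)."""
    if plays_active_role:
        # An "active" or false host -- the Puls4 argument -- loses the privilege.
        return False
    if knows_of_infringement and not removed_expeditiously:
        # Knowledge without prompt removal also defeats the shield.
        return False
    return True

# A passive host that takes content down promptly upon notice:
print(host_is_shielded(False, True, True))   # True
# A host that sorts, filters and recommends uploaded content:
print(host_is_shielded(True, False, False))  # False
```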

Where YouTube is simply a host provider, it is the individual who uploaded the video in the first instance who is to blame for the theft of copyrighted material. This time, the Court disagreed with YouTube’s argument, finding the media giant to be jointly responsible for the copyright infringement.

So, why should we care about the Puls4 case? Although Austrian case law is not binding for other European Union member states, the Commercial Court’s judgment sets a precedent for denying Host Provider Privilege to YouTube. This may encourage similar decisions in the future which are based on the same line of argument.

Speaking to German newspaper Der Standard, Puls4’s CEO Markus Breitenecker explained that YouTube had effectively abandoned its neutral intermediary position and assumed an active role, which provided it with a knowledge of or control over certain data. In European legislative parlance, this is known as being a false hosting provider or false intermediary.

For years, many of us have assumed that YouTube is just an inanimate platform to which users upload videos. This case underscores that YouTube can no longer “play the role of a neutral intermediary” because of its “links, mechanisms for sorting and filtering, in particular the generation of lists of particular categories, its analysis of users’ browsing habits and its tailor-made suggestions of content.”

Puls4 and YouTube have until early July to petition the court, before it issues its binding ruling. In a statement to The Local Austria, YouTube said it was studying the ruling and “holding all our options open, including appealing” the decision. In the meantime, however, YouTube noted that it takes protecting copyrighted work very seriously.

If the preliminary decision is upheld, YouTube must perform a content check upon upload, instead of simply removing copyright infringing content upon notification. In respect of this, the Viennese court stated that “YouTube must in future — through advance controls — ensure that no content that infringes copyright is uploaded.” It is therefore rather timely that YouTube began beta testing a feature called Copyright Match last month, a tool which allows users to scan the platform to locate full re-uploads of their original videos on other users’ YouTube channels.
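
In spirit, upload-time matching works something like the simplified sketch below. Real systems such as Content ID or Copyright Match rely on robust perceptual fingerprints that survive re-encoding and cropping; the exact hash used here is a deliberately crude stand-in:

```python
import hashlib

# fingerprint -> rights holder, populated by broadcasters like Puls4
registered: dict[str, str] = {}

def fingerprint(video_bytes: bytes) -> str:
    # Stand-in for a perceptual fingerprint of the video.
    return hashlib.sha256(video_bytes).hexdigest()

def register(video_bytes: bytes, owner: str) -> None:
    registered[fingerprint(video_bytes)] = owner

def check_upload(video_bytes: bytes) -> str:
    """Advance control: block matching content at upload,
    rather than removing it only after a notification."""
    owner = registered.get(fingerprint(video_bytes))
    if owner is not None:
        return f"Blocked at upload: matches content registered by {owner}."
    return "Accepted: no registered match found."

register(b"<broadcast recording>", "Puls4")
print(check_upload(b"<broadcast recording>"))  # Blocked at upload: ...
print(check_upload(b"<original home video>"))  # Accepted: ...
```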

Some Puls4 content is still available on YouTube (at least, here in the UK).

The European Parliament seems to think the arguments about false hosting providers are best left to the courts to decide, and that despite the E-Commerce Directive being more than 15 years old, there is no pressing need for reform. In a recent report on the matter, the European Parliament’s Committee on the Internal Market and Consumer Protection stated that while false hosting providers may not have been envisaged at the time of the adoption of the E-Commerce Directive in 2000, “the delineation between passive service providers caught by Article 14 and active role providers remains an issue for the court.”

 

 

Now you’re just somebody that I used to know

The GDPR has been in force for less than two weeks, but Europeans have already started to contact companies left, right and centre to exercise their newly enshrined statutory “right to be forgotten.”

However, this right is not absolute, and only applies in certain circumstances. Let’s look at the balancing act between a data subject’s right to have their data erased on the one hand, and an organisation’s need to maintain data for legitimate purposes, on the other.

Organisations (data controllers and processors) are obliged to collect and use personal data only in a lawful manner, as set out in Article 6. There are several types of “lawful processing,” including instances where an individual grants his or her explicit and informed consent. But lawful processing also covers the use of data for a controller’s legitimate interests, the performance of a contract, or legal obligations, such as fraud prevention. For more on lawful processing, check out my earlier post – Lights, camera, data protection?

With this in mind, it’s important to note that an individual has the right to be forgotten only in certain scenarios. Under Article 17(1), their data must be either:

  1. no longer necessary for the original purpose;
  2. processed based on consent, which has since been withdrawn;
  3. processed based on the organisation’s legitimate interests, to which the individual objects;
  4. processed for direct marketing purposes, to which the individual objects;
  5. processed unlawfully (in contravention of Article 6); or
  6. erased to comply with a legal obligation.

But before an organisation hits “delete”, it must check whether any purposes for retention apply. In pre-GDPR days, data subjects had to prove they had the right for their data to be erased. The burden now lies with the controller to prove that it has a legal basis for retaining the data. If so, the organisation has a lawful reason to refuse the erasure request. In fact, deleting data when an exemption does apply could be a breach of the law!

The purposes for retention under Article 17(3) are:

  1. the right of freedom of expression and information;
  2. complying with a legal obligation, or for performing a task in the public interest;
  3. for reasons of public health;
  4. for archiving in the public interest, including scientific or statistical research; or
  5. for the establishment, exercise or defence of legal claims.

Additionally, “manifestly unfounded” or “excessive” requests may be refused outright.
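
Tying the two lists together, here is a schematic sketch of the balancing act, with shorthand labels standing in for the statutory wording (illustrative only, not legal advice):

```python
ERASURE_GROUNDS = {            # Article 17(1), paraphrased
    "no_longer_necessary", "consent_withdrawn",
    "objects_to_legitimate_interests", "objects_to_direct_marketing",
    "unlawful_processing", "legal_obligation_to_erase",
}

RETENTION_EXEMPTIONS = {       # Article 17(3), paraphrased
    "freedom_of_expression", "legal_obligation_or_public_task",
    "public_health", "archiving_or_research", "legal_claims",
}

def handle_erasure_request(grounds: set[str], exemptions: set[str],
                           manifestly_unfounded: bool = False) -> str:
    if manifestly_unfounded:
        return "Refuse: request is manifestly unfounded or excessive."
    if not grounds & ERASURE_GROUNDS:
        return "Refuse: no Article 17(1) ground applies."
    if exemptions & RETENTION_EXEMPTIONS:
        # The burden is on the controller to evidence the exemption.
        return "Refuse: an Article 17(3) purpose for retention applies."
    return "Erase: the right to be forgotten prevails."

# A withdrawn-consent marketing case with no exemption in play:
print(handle_erasure_request({"consent_withdrawn"}, set()))
```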

From what I’ve seen in practice over the last few days, most erasure requests are made because an individual no longer wants to receive marketing emails. Fair enough: in shifting responsibility onto corporate controllers, the right to be forgotten strengthens individual control. It also signifies public disapproval of entities which process – and, in some instances abuse – enormous quantities of personal information without the explicit consent or knowledge of the individuals concerned.

For those of us interested in the societal and human rights implications (I’m telling you – data protection isn’t just for the techies amongst us!) it’s worthwhile to consider how journalism fits into the picture.

As Oxford’s International Data Privacy Law journal summarises rather eloquently: “The nebulous boundaries and susceptibility to misuse of the right to be forgotten make it a blunt instrument for data protection with the potential to inhibit free speech and information flow on the Internet.”

As early as 2012, Reporters Without Borders (formally, Reporters Sans Frontières) criticized the right to be forgotten – then in early draft stages – as a generalised right that individuals can invoke when digital content no longer suits their needs. This runs the risk of trumping the public interest in the information’s availability. RSF also contends that the demand for complete erasure of online content, or the “right to oblivion”, could place impossible obligations on content editors and hosting companies.

EU Commissioner Viviane Reding responded to the criticism from RSF by explaining that the GDPR “provides for very broad exemptions to ensure that freedom of expression can be fully taken into account.”

Note – this post covers the statutory Right to Erasure under Article 17 of the GDPR. Although related, it is distinguished from the recent high-profile cases against Google, in which the English High Court held that one claimant convicted of a crime was entitled to the right to be forgotten, and therefore to be delisted from Google search results. A more serious offence, with fewer mitigating circumstances, did not attract the same right.

photo © Cassidy Kelley