Regulating the Raunchy? A look at free speech and obscenity under Miller v. California

One of the most interesting aspects of being a technology lawyer is that it necessarily requires a strong understanding of Internet regulation and digital rights, including the right to express yourself online.  As such, free speech is one of my favourite areas of legal history and theory.  Coincidentally, two major US Supreme Court cases regarding free speech were decided on this day —  21 June!

This post takes a look at one of them: Miller v. California [1973].  In a later post, I’ll explore a second landmark free speech case decided on 21 June: Texas v. Johnson [1989].

The Constitution in Court.  

Most people know that the First Amendment of the US Constitution protects freedom of speech. However, it’s actually a bit more complicated than many would guess. In its entirety, the First Amendment says:

Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.

Putting the aspects regarding religion, assembly, and petitions to one side, what this Amendment essentially does is bar the government from restricting freedom of speech. But what does that look like in practice?

Of course, we cannot travel back in time to 1789 to ask James Madison what he meant when he drafted the Bill of Rights. Instead, American courts have over time developed various methodologies for applying a text written some 230 years ago to modern facts.


Miller v. California – to what extent can the government regulate porn, and why should we care?

The case of Miller v. California, 413 U.S. 15 (1973) concerns pornography and whether or not the government is allowed to regulate obscene material. Marvin Miller was the owner/operator of a California mail-order business specializing in pornographic films and books. When his company’s brochures were sent to and opened by a restaurant owner in Newport Beach, California, the restaurant owner called the police. Miller was subsequently arrested and charged with violating California Penal Code § 311.2, which is paraphrased below:

Every person who knowingly sends into California for sale or distribution, or in this state possesses, prepares, publishes, with intent to distribute or to exhibit to others, any obscene matter is guilty of a misdemeanor.

The jury at Miller’s trial in State court had been instructed to consider the pornographic materials in question and determine whether they were “obscene.” The jury decided that they were, and Miller was found guilty. Because he objected to the way in which the jury had arrived at this conclusion, he appealed the decision to the Supreme Court.

Although the Supreme Court ultimately vacated the earlier jury verdict and remanded the case to the California Superior Court, the matter became a landmark decision and the basis for what is now known as the Miller Test.

Writing the majority opinion, Chief Justice Burger reaffirmed in Miller that obscenity can be regulated by the government because it is “unprotected speech.” Referring to Roth v United States (1957) and other similar cases, the Chief Justice explained that obscenity is not within the area of constitutionally protected freedom of speech under either the First Amendment or the Due Process Clause of the Fourteenth Amendment. “In the light of history,” Justice Brennan had said in Roth, “it is apparent that the unconditional phrasing of the First Amendment was not intended to protect every utterance.”

Legal Fun Fact:  The first conviction for obscenity in Great Britain occurred in 1727. Edmund Curll was convicted for publishing erotic fiction titled “Venus in the Cloister or The Nun in her Smock” under the common law offence of disturbing the King’s peace. 

Now that we are clear that the First Amendment does not protect obscenity, the obvious next question is: what is obscenity?

In Miller, Justice Burger acknowledged the inherent dangers of regulating any form of expression, and said that “State statutes designed to regulate obscene materials must be carefully limited.” As a result, the Supreme Court was tasked with confining “the permissible scope of such regulation to works which depict or describe sexual conduct.”

This brings us to Burger’s three-part test for juries in obscenity cases. Obscenity is now defined as material which: (1) the average person, applying contemporary community standards, would find appeals to the prurient interest; (2) depicts or describes sexual conduct in a patently offensive way; and (3) taken as a whole, lacks serious literary, artistic, political, or scientific (or “SLAPS”) value. In short, material must satisfy the prurient interest, patent offensiveness, and SLAPS prongs to be obscene.

The Miller test changed the way courts define obscenity, and accordingly, what does – or does not – deserve protection as “free speech.”  

This Miller obscenity test overturned the Court’s earlier definition of obscenity established in Memoirs v Massachusetts (1966). In Memoirs, the Court had decided that obscenity was material which was “patently offensive and utterly without redeeming social value.” Furthermore, the Memoirs decision made clear that “all ideas having even the slightest redeeming social importance have the full protection of the guaranties [of the First Amendment]”.

By adopting the Miller decision, the Supreme Court departed from Memoirs in favour of a more conservative and narrow interpretation of the types of speech which qualify for First Amendment protection. Rather than treating obscenity simply as that which is “utterly without redeeming social value” of any kind, the Court made obscenity a more localised, subjective standard. This offers wider discretion to State legislatures and police agencies, as well as prosecutors and jurors, to decide whether material is “obscene” under local community standards.

Not everyone agrees!  Unsurprisingly, the Miller decision was a narrow one, and split the Court 5-4.

Chief Justice Burger wrote the majority opinion, with Justice Douglas penning the dissent.

Justice William O. Douglas wrote the dissent and, at the risk of sounding like a total legal geek, I highly suggest taking a quick read of it! One of my favourite excerpts is as follows:

The idea that the First Amendment permits government to ban publications that are “offensive” to some people puts an ominous gloss on freedom […] The First Amendment was designed “to invite dispute,” to induce “a condition of unrest,” to “create dissatisfaction with conditions as they are,” and even to stir “people to anger.” The idea that the First Amendment permits punishment for ideas that are “offensive” to the particular judge or jury sitting in judgment is astounding. 

Nevertheless, despite the dissent and criticism, the Miller test remains the federal and state standard for deciding what is obscene. However, the rise of the Internet has complicated matters, not least because the concept of “community standards” is difficult to define given how interconnected we are today.

What do you think? After nearly 50 years, should the Supreme Court reconsider what “obscenity” means? Is the Miller Test due for an update?

Have European laws improved American privacy protections?

The European Union’s landmark data privacy law, the General Data Protection Regulation (GDPR), went into effect one year ago this week. By now, the implications for European residents and companies are fairly well known. Many of us will have received updated privacy policies in our email inboxes, or become increasingly aware of headline-grabbing stories about mass data breaches. But what about beyond the borders of Europe? Has the GDPR changed the way in which data protection and privacy matters are viewed in the United States?

The first thing to consider is whether the GDPR has the power to influence how American companies handle data. The answer is yes. The GDPR is a single legal framework that applies across all 28 EU member states – including, for the time being, the United Kingdom. But in a considerable departure from the old Data Protection Directive (95/46/EC), the GDPR imposes an expanded territorial scope that reaches beyond the EU itself. No matter where they are located around the world, companies must comply with the GDPR if they either offer goods or services to European residents, or monitor their behavior (see Article 3(2) and, inter alia, Recitals 23 and 24).

These new rules are not without teeth. Whereas fines under the UK’s implementation of the previous directive generally maxed out at £500,000, fines under the GDPR can reach 20 million euros or 4% of a breaching company’s worldwide annual turnover, whichever is higher. Accordingly, from 25 May 2018, many American companies became subject to European privacy laws for the first time, and faced considerably enhanced sanctions for noncompliance.
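To see how that fining ceiling works in practice, here is a minimal arithmetic sketch in Python. The “greater of €20 million or 4% of worldwide turnover” rule comes from Article 83(5) GDPR; the function name and the turnover figures below are purely hypothetical illustrations of the calculation, not part of any official guidance.

```python
# Illustrative only: for the most serious infringements, Article 83(5) GDPR caps fines
# at the greater of EUR 20 million or 4% of the undertaking's total worldwide annual
# turnover for the preceding financial year. All turnover figures here are hypothetical.

def max_gdpr_fine(annual_worldwide_turnover_eur: float) -> float:
    return max(20_000_000.0, 0.04 * annual_worldwide_turnover_eur)

print(max_gdpr_fine(10_000_000_000))  # large multinational: exposure of up to EUR 400 million
print(max_gdpr_fine(5_000_000))       # small firm: still exposed to the EUR 20 million ceiling
```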

As a result, in the lead-up to GDPR taking effect, many Europeans were geo-blocked from accessing American websites. The reason? If European customers were blocked from accessing the websites, the companies would not technically be “offering their goods or services” to Europeans, nor would they be “monitoring their behavior”.

Although the majority of companies retreating from Europe were small to medium-sized technology companies, others included global names such as the Los Angeles Times (US small businesses drop EU customers over new data rule, Financial Times).


The other approach taken by US companies was to move data centres and servers from Europe to the United States. Facebook made headlines by shifting data concerning more than 1.5 billion users from Ireland to its main offices in California. Although Facebook told Reuters that it applies “the same privacy protections everywhere, regardless of whether your agreement is with Facebook Inc [California] or Facebook Ireland,” representatives from the social media giant noted that “EU law requires specific language” in mandated privacy notices, whereas American law does not.

Has the GDPR made Europe “too chilled” for American tech companies? It is important to note that users impacted by Facebook’s server relocation mentioned above were non-EU users. Furthermore, the data migration does not release Facebook from its obligation to comply with the GDPR, insofar as European users are concerned. Nevertheless, the relocation underscores the point that the United States is often seen as a more friendly home for companies seeking fewer, less stringent privacy regulations.

Several companies which initially fled the long arm of the GDPR have returned to Europe, albeit with significantly changed privacy notices and data protection practices. However, many have stayed away. Some privacy advocates will hail the departure of American tech companies that are unwilling to comply with the new privacy rules. But while it is true that privacy protection is an important and fundamental human right, it cannot be ignored that an increasing body of evidence suggests the GDPR has had a chilling effect on a wide variety of overseas companies.

According to a recent study by the Illinois Institute of Technology and the National Bureau of Economic Research, there has been an 18% decrease in the number of EU venture deals and a 40% decrease in the dollar amount per deal following GDPR implementation (The Short-Run Effects of GDPR on Technology Venture Investment).

Taken together with increased European regulation of the digital economy as a whole, it is arguable that lawmakers in Brussels are making it more difficult for American companies to enter the European market. Even for those that have decided to remain in the EU despite the enhanced regulations, the future remains uncertain.

Will the GDPR inspire privacy laws in the United States? Given that US companies – even those located in America – must now play by European privacy rules in order to reach the EU market, it is arguable that various technology and media entities will start to impose tougher privacy standards on themselves. Such self-regulation is likely to be welcomed by technology professionals and corporate insiders, who may consider themselves better positioned than regulators and lawmakers to tackle the problems of privacy in a digital age. However, as we have seen in sectors ranging from pharmaceuticals to finance, self-regulation often falls short when it comes to consumer protection.

 

In April 2018, Facebook founder Mark Zuckerberg was called before the US Senate to answer questions over Facebook’s responsibility to safeguard user privacy and the Cambridge Analytica scandal.

For a variety of reasons which fall beyond the scope of this post, the privacy laws of the United States have developed in an ad hoc fashion. Apart from the Children’s Online Privacy Protection Act (COPPA) and the Health Insurance Portability and Accountability Act (HIPAA), few national laws exist to protect data privacy.

Instead, in the United States, companies are caught by different laws depending on which State they are headquartered in, or where they do business. The federal laws which do touch on data privacy most often regulate specific industry sectors, such as the health insurance industry mentioned above. Even in the wake of the Equifax data breach of summer 2017 – which affected over 145 million US consumers – attempts to improve consumer privacy protections have failed to pass in Congress.

Despite the lack of federal legislation, some American states are using their powers to pass laws at a more local level. One such state is California, which happens to boast both the world’s fifth-largest economy and one of the most impressive technology industries. Last year, California Governor Jerry Brown signed the California Consumer Privacy Act (CCPA) into law.

While, at only 12 pages, the law is a far cry from the far more comprehensive GDPR, it does grant California consumers specific rights over their personal information held by companies. Perhaps most interestingly, because the CCPA applies to any company which does business with California residents, the law will likely have a major impact on the privacy landscape across the country.

This raises the question: if the United States is in need of enhanced privacy protections, who should spearhead the endeavour? The US federal government via Congress, state legislators, or companies themselves? Some believe consumers will be better protected if Congress resists the temptation to intrude at the federal level, allowing the states to experiment with their own legislation.

As we have seen in Europe, it is abundantly clear that any single privacy framework must be both flexible and scalable across a variety of industry sectors, geographies, and company types. To add to the political complexity, powerful industry players will likely lobby for special exceptions, and various federal agencies may clash over who will enforce any such regulation(s).

In conclusion, it is safe to say that the GDPR has indeed changed the way in which data protection and privacy matters are viewed outside of Europe. But the direction in which the Americans will choose to take things remains unclear.

On the one hand, some American companies have retreated from the EU. On the other, local governments have begun to take consumer privacy more seriously by introducing new domestic data protection legislation. Finding a balance between the two forces of economic enterprise and regulatory power may prove difficult. More likely, there will be a push-and-pull effect; whether privacy will prevail remains to be seen.

DeepFakes and False Lights: what does the law say?

What do Scarlett Johansson, cyber intelligence experts and some lawmakers have in common? A shared concern about AI-generated videos. Known as “DeepFakes,” these videos can have a damaging impact on reputations, emotional health, and even national security. But what is the legal status of this disruptive – and oftentimes disturbing – technology?

Deepfake – which combines “deep learning” and “fake” – is commonly defined as an artificial intelligence-based human image synthesis technique. Put simply, it’s a way to superimpose one face over another.

In December 2017, an anonymous Reddit user started a viral phenomenon by using machine-learning software to swap porn performers’ faces with those of famous actresses. Scarlett Johansson, one of the most highly paid actresses in Hollywood, has herself been the victim of such “creations”. Speaking to the Washington Post, she explained that “nothing can stop someone from cutting and pasting my image or anyone else’s onto a different body and making it look as eerily realistic as desired. There are basically no rules on the internet because it is an abyss that remains virtually lawless.”

It goes without saying that such fake porn videos can easily damage careers, emotional well-being, and a person’s sense of dignity and self-esteem. But there are other implications, too.

As a general starting point, it’s useful to have an understanding of what AI is – and isn’t. “Artificial Intelligence” is not another word for the robot overlords in Blade Runner or even Skynet’s Terminators. Rather, AI fundamentally refers to machine-learning applications in which a computer learns to fulfill a certain task on its own. What makes AI special is that machines are essentially “taught” to complete tasks that were previously done by humans, by doing the task over and over again.

With deepfakes, it doesn’t take long for the AI to learn the skill with eerie precision, and produce sophisticated (albeit artificial) images. The technology has many legitimate uses, especially in the film industry, where an actor’s face can be placed on their stunt double’s body. But thanks to continued advancement in the technology itself, the political and legal risks are higher than ever before.
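For readers curious about the mechanics, the original face-swap tools are commonly described as a pair of autoencoders that share a single encoder but keep a separate decoder for each person. The toy PyTorch sketch below illustrates only that general idea: the class names are my own, random tensors stand in for real, aligned face crops, and this is emphatically not the code behind any actual deepfake software.

```python
# A toy illustration of the shared-encoder / per-identity-decoder idea behind
# face-swap deepfakes. Random tensors stand in for aligned 64x64 face crops.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64x64 -> 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32x32 -> 16x16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16x16 -> 8x8
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8x8 -> 16x16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16x16 -> 32x32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32x32 -> 64x64
        )

    def forward(self, z):
        return self.net(z)

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity

faces_a = torch.rand(8, 3, 64, 64)  # stand-in batch of person A's face crops
faces_b = torch.rand(8, 3, 64, 64)  # stand-in batch of person B's face crops

params = list(encoder.parameters()) + list(decoder_a.parameters()) + list(decoder_b.parameters())
optimiser = torch.optim.Adam(params, lr=1e-4)
loss_fn = nn.L1Loss()

for step in range(100):  # real training runs for many thousands of steps on real data
    optimiser.zero_grad()
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) + \
           loss_fn(decoder_b(encoder(faces_b)), faces_b)
    loss.backward()
    optimiser.step()

# The "swap": encode a frame of person A, then reconstruct it with person B's decoder.
with torch.no_grad():
    swapped = decoder_b(encoder(faces_a))
```

The last two lines are the whole trick: once the shared encoder has learned a common representation of both faces, a frame of person A can be decoded through person B’s decoder, producing B’s face with A’s pose and expression.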

On 29 January, US Director of National Intelligence Dan Coats spoke before the Senate Select Committee on Intelligence to deliver the Worldwide Threat Assessment, which had been compiled by the US intelligence community. The document sets out the biggest global threats in the following order: cyber; online influence operations (including election interference); weapons of mass destruction; terrorism; counterintelligence; and emerging and disruptive technologies.

Yes, cyber attacks and online influence operations are discussed before traditional weapons of mass destruction. The report even mentions deepfakes explicitly:

Adversaries and strategic competitors probably will attempt to use deep fakes or similar machine-learning technologies to create convincing—but false—image, audio, and video files to augment influence campaigns directed against the United States and our allies and partners.

Senator Mark Warner, the top Democrat on the Senate Intelligence Committee, explained that “we already struggle to track and combat interference efforts and other malign activities on social media — and the explosion of deep fake videos is going to make that even harder.” This is particularly relevant given the severe political polarization around the world today: from Brexit to Trump and everywhere in between, deepfakes could become powerful ways to spread more disinformation and distrust.

There are some legal remedies which may combat some of the more nefarious aspects of the deepfake. As explained by the International Association of Privacy Professionals (IAPP), in common law jurisdictions like the United States and the United Kingdom, the victim of a deepfake creation may be able to sue the deepfake’s creator under one of the privacy torts. By way of example, the false light tort requires a claimant to prove that the deepfake in question incorrectly represents the claimant, in a way that would be embarrassing or offensive to the average person.

Another potentially relevant privacy tort is that of misappropriation, or the right of publicity, if the deepfake is used for commercial purposes. Consider, for example, if someone made a deepfake commercial of Meghan, the Duchess of Sussex, endorsing a certain makeup brand. Since individuals generally do not own the copyright interest in their own image (i.e., the photograph or video used to make a deepfake), copyright law is not a good remedy to rely upon. Instead, Meghan could argue that the deepfake misappropriated her personality and reputation for someone else’s unauthorised commercial advantage. However, it’s important to note that personality rights are frustratingly nebulous here in the United Kingdom, as I explained in Fame and Fortune: how celebrities can protect their image.

Depending on the nature of the deepfake, a victim may also be able to sue for the intentional infliction of emotional distress, cyberbullying, or even sexual harassment. But in many instances, the burden of proof to establish these claims can be a notoriously difficult standard to meet.

Furthermore, the practical challenges of suing the creator of a deepfake are considerable. Firstly, such creators are often anonymous or located in another jurisdiction, which makes legal enforcement very difficult. Although a victim could ask the creator’s internet service provider (ISP) to remove the deepfake, establishing what is known as “online intermediary liability” and forcing an ISP to get involved can be an uphill battle in and of itself (this was the topic of one of my papers in law school). As for the victim exercising their right to be forgotten under the EU’s General Data Protection Regulation (Article 17, GDPR), the same problem arises: who is responsible for taking down the deepfake?

Secondly, the creator may lodge a defense of free speech or creative expression, especially if the deepfake victim is a political figure or otherwise in the public spotlight. This raises the question: to what extent is a deepfake depicting a member of parliament any different from a satirical cartoon or parody? Unless the deepfake is outrageously obscene or incites actual criminal behaviour, it may be nearly impossible to take legal action.

Deepfakes are but one of many instances where the law has not quite kept up with the rapid development of new technology. Although issues like these keep technology lawyers like myself employed, the potential for genuine harm caused by deepfakes in the wrong hands cannot be overstated. It should be fairly clear that outlawing or attempting to ban deepfakes is neither possible nor desirable, but perhaps increased regulation is a viable option. Deepfakes could be watermarked or labelled before being shared by licensed or regulated entities (for example, news organisations) much in the same way that airbrushed models in advertisements are labelled in France. Doing so may at least slow down the proliferation of deepfakes purporting to be genuine.
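On the labelling point specifically, here is a purely illustrative sketch using the Pillow imaging library to stamp a visible disclosure banner onto a single frame. The frame and file name are hypothetical stand-ins, and any real disclosure scheme would need to be far more robust than a caption burned into the pixels (for instance, cryptographic watermarking or signed metadata).

```python
# A toy illustration of visible labelling, not a robust watermarking scheme.
from PIL import Image, ImageDraw

def label_synthetic(frame: Image.Image, text: str = "SYNTHETIC MEDIA") -> Image.Image:
    """Stamp a visible disclosure banner across the top of a (hypothetical) video frame."""
    labelled = frame.copy()
    draw = ImageDraw.Draw(labelled)
    draw.rectangle([(0, 0), (labelled.width, 24)], fill=(0, 0, 0))  # black banner strip
    draw.text((8, 6), text, fill=(255, 255, 255))                   # white label text
    return labelled

frame = Image.new("RGB", (640, 360), color=(128, 128, 128))  # stand-in for a single frame
label_synthetic(frame).save("labelled_frame.png")
```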

But until then, the only advice remains that you shouldn’t believe everything you read – or see, or hear – online.

Update 14 June 2019: The European Commission has released a joint Report on the implementation of the Action Plan Against Disinformation – available here.