Access the article at the Journal of Intellectual Property Law & Practice. An earlier version of the paper is available for free at SSRN.
In 2017, a machine learning algorithm was published online as a tool to insert the faces of celebrity actresses into pornographic videos. This “deepfake” phenomenon has since spread across social media and is no longer confined to sexual contexts. The technology can be used to swap faces in film scenes, or even to digitally insert people into their favorite movie clips. Although the results are often comical, the sophistication and realism of deepfakes have rapidly improved over the last several years, making them difficult to identify as fake. There is growing concern that such videos could be used to extort, intimidate, or otherwise defame an individual. In such instances, could the victim portrayed in the deepfake bring a lawsuit against its creator?
In California, perhaps. There, a person has a statutory and common law “publicity right”: a cause of action used to prevent or penalize the misappropriation of one’s image, photograph, or voice. By contrast, the lack of a recognized image right under English law can be a source of frustration among claimants and of debate among lawyers. Drawing upon her knowledge of English and Californian law, the author explores whether California’s codified publicity right is superior to the English piecemeal approach, using the deepfake phenomenon as a case study.
December 1, 2020