
Criminal

May 1, 2019

The dangers of deepfakes


Rebecca Delfino

Professor, Loyola Law School


Daily we are bombarded in cyberspace with fake information, images and videos, and the law has failed to keep pace with the fallout. Recently, new technology has emerged on the internet that blurs reality by allowing users to create "deepfakes" -- doctored images and videos that believably map one person's likeness onto another person's body. Although AI-assisted technology may have legitimate uses for artistic expression or scientific research, deepfakes also pose risks that will undermine public confidence in our democratic institutions and cause confusion and chaos. For example, AI-assisted technology could be used to create fake videos of law enforcement officers committing misconduct, political candidates engaging in criminal behavior, or public officials announcing an impending terrorist attack or natural disaster.

Digital impersonation technology, however, is not limited to the realm of governmental actors and public figures. It could be used to create false images involving private individuals. Particularly troubling, AI-assisted technology has been used to create nonconsensual pornography. Even though female celebrities have been predominately targeted in pornographic deepfakes, private individuals may soon appear in face-swapped porn videos. The technology has evolved so quickly that a smartphone application designed to create deepfakes is now available. The proliferation of social media accounts and the public sharing of photographs enable users of the app to gather digital images from any individual's online accounts to create a deepfake. Millions of individuals with digital images posted on the internet are vulnerable to starring in a pornographic deepfake. And unlike public officials and celebrities, who have the resources and access to the cyber-public square to correct any misconception created by a deepfake, a private individual has no similar recourse.

As deepfake technology has become more refined and easier to use, the inadequacy of the law to protect potential victims has also become apparent. Because deepfake technology can be used to create realistic pornographic videos without the consent of the individuals depicted, these deepfakes exist in the realm of other sexually exploitative cybercrimes such as "revenge porn." Revenge porn is the distribution of genuine sexually explicit photos or videos without the subject's consent.

Pornographic deepfakes and revenge porn have troubling commonalities. Like revenge porn, pornographic deepfakes predominately affect women, and they both violate the individuals' expectation that sexual activity should be founded on consent. Both revenge porn and pornographic deepfakes can also cause similar harms to the victim's reputation and emotional well-being.

Despite the similarities, pornographic deepfakes and revenge porn differ. Revenge porn typically involves images of private individuals engaged in intimate acts that were intended to remain private, and thus criminal liability has been imposed in part because revenge porn violates the victim's right to sexual privacy. Pornographic deepfakes, in contrast, do not necessarily raise the same sexual privacy concerns. Because a deepfake does not depict an actual person engaged in the acts shown, no one has a privacy interest at stake in the depiction itself. Nonetheless, in the case of a private individual whose face was used to create a pornographic deepfake, viewers may not realize the video is a fake and may assume that it is genuine.

In addition, deepfakes differ from revenge porn because of the challenge of identifying the victims. Revenge porn usually involves easily identifiable victims. Each deepfake video, however, depicts at minimum two people -- the person whose body is shown, who may be difficult to identify, and the person whose face has been added. Problems also exist in locating and bringing the perpetrators to justice and in removing the video once it has been published to the internet, in light of the Communications Decency Act of 1996, 47 U.S.C. Section 230, which shields websites and content distributors from liability for third-party content.

Another significant distinction between pornographic deepfakes and revenge porn is that revenge porn is a crime in 41 states and has also been the subject of federal criminal legislation -- the Ending Nonconsensual Online User Graphic Harassment (ENOUGH) Act of 2017. In contrast, no federal or state laws criminalize the creation or distribution of pornographic deepfakes.

Prosecutors might attempt to prosecute pornographic deepfakes under existing criminal laws, including cyberstalking, criminal threats, unauthorized access to digital files, or a state's revenge porn statute. These crimes, however, contain elements that are often absent in a deepfake, including the fear of bodily injury or death, the communication of a threat, or a pattern of conduct. A revenge porn statute may require proof that the perpetrator distributed images of an identifiable person, that the victim had a reasonable expectation of privacy in the depiction, and that the victim suffered emotional distress.

Notably, the Malicious Deep Fake Prohibition (MDFP) Act of 2018, which expired at the end of the congressional session last year, would have criminalized the creation and interstate distribution of deepfakes. The MDFP Act, however, is problematic. First, the definition of "deepfake" in the Act is overbroad and vague, encompassing any "audiovisual record created or altered in a manner that the record would falsely appear to a reasonable observer to be an authentic record of the actual speech or conduct of an individual." This definition seemingly sweeps in a wide range of media, including legitimate, non-offensive content like computer-generated imagery in films. The MDFP Act also includes an equally expansive exemption for First Amendment speech, which would allow nearly every deepfake to be subject to a parody or satire defense. Second, the liability placed on distributors is expansive; it may motivate content providers to automatically remove subject matter without considering whether it is legitimate content.

Further, the MDFP Act only criminalizes deepfakes if the distribution is done with the intent to violate some other criminal or tort law. By conditioning criminal sanctions on proof of intent to violate another law, MDFP does not adequately protect victims of pornographic deepfakes. The creation or distribution of a pornographic deepfake, standing alone, deserves criminal punishment without requiring intent to violate another law.

Victims of pornographic deepfakes, like victims of revenge porn, need concrete solutions for holding creators of content liable, policing distributors, and removing the offensive content from the internet. As in the case of revenge porn, criminalizing deepfakes under California law would be a good start -- it would encourage other states to follow suit -- but it will not resolve the problem. Because deepfakes are an outgrowth of the internet, they are not confined to an individual state's jurisdiction. The conduct, therefore, must be addressed through federal criminal law specifically directed at pornographic deepfakes. The proposed criminal statute should combine the relevant, narrowly drawn elements of the proposed federal legislation criminalizing revenge porn and deepfakes with new elements, including criminal injunctions, victim restitution, and targeted exemptions for subject matter protected by the First Amendment. Solutions will also need to emerge outside the legal context, including training law enforcement and the judiciary, developing effective means of technologically verifying content, and mounting a public education campaign in digital literacy to make people more adept consumers of online content.


