This is the property of the Daily Journal Corporation and fully protected by copyright. It is made available only to Daily Journal subscribers for personal or collaborative purposes and may not be distributed, reproduced, modified, stored or transferred without written permission.

Intellectual Property,
Civil Litigation

Apr. 18, 2018

You won’t believe your eyes

We have entered a time when documented photographic or video evidence is not unassailable proof; it is not uniformly authentic, nor does it necessarily convey fact or truth.

Melanie J. Howard

Partner
Loeb & Loeb LLP

10100 Santa Monica Blvd., Ste. 2200
Los Angeles, California 90067

Email: mhoward@loeb.com

Melanie J. Howard is based in the firm's Los Angeles office. She is chair of the firm's Intellectual Property Protection group and represents clients from a variety of industries, including advertising, automotive, technology, entertainment, media and retail.



This column appeared in the April 18 TOP INTELLECTUAL PROPERTY LAWYERS special report.

When she thought it over afterwards, it occurred to her that she ought to have wondered at this, but at the time it all seemed quite natural.

-- Lewis Carroll, "Alice's Adventures in Wonderland & Through the Looking-Glass"

Perhaps we don't wonder enough about what we see. As technology has advanced, we have become conditioned to rely upon certain types of media as per se evidence of fact or truth, rather than pausing to question their veracity and authenticity. It is time to start wondering again.

Artificial intelligence and deep learning are being leveraged with face-swapping software to create videos that convincingly portray celebrities, politicians and non-public figures as doing or saying things they never did or said (nicknamed "deep fakes"). Previously limited in its application and accessibility due to the amount of computing power needed to process the digital manipulation, this technology is increasingly available to anyone with access to moderately powerful computers and/or cloud computing applications. The popular FakeApp application is free to download and uses deep-learning neural networks and face mapping software to detect patterns between two subjects' faces, based upon photographs of the individuals. The more images provided, the more realistic the final outcome.
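The shared-representation idea behind this kind of face-mapping software can be illustrated with a deliberately simplified numerical sketch. The code below is not FakeApp's actual implementation; it is a toy linear autoencoder on synthetic data, showing only the core pattern the paragraph describes: one shared encoder learns features common to both subjects, a separate decoder learns each subject's appearance, and a "swap" is produced by decoding one subject's encoding through the other's decoder.

```python
import numpy as np

# Toy sketch of the shared-encoder / per-identity-decoder idea behind
# face-swapping autoencoders. All data and dimensions here are synthetic;
# real tools such as FakeApp train deep convolutional networks on many
# aligned photographs of each subject.

rng = np.random.default_rng(0)
d, k, n = 32, 4, 200                        # "image" size, latent size, samples

# Two synthetic "identities": samples drawn from two different subspaces.
faces_a = rng.normal(size=(n, k)) @ rng.normal(size=(k, d))
faces_b = rng.normal(size=(n, k)) @ rng.normal(size=(k, d))

enc = rng.normal(scale=0.1, size=(d, k))    # shared encoder
dec_a = rng.normal(scale=0.1, size=(k, d))  # decoder for identity A
dec_b = rng.normal(scale=0.1, size=(k, d))  # decoder for identity B

def recon_err(faces, dec):
    """Mean squared reconstruction error through the shared encoder."""
    return float(np.mean(((faces @ enc) @ dec - faces) ** 2))

err_before = recon_err(faces_a, dec_a)

lr = 1e-3
for _ in range(500):
    for faces, dec in ((faces_a, dec_a), (faces_b, dec_b)):
        z = faces @ enc                     # encode with the shared encoder
        err = z @ dec - faces               # reconstruction error
        grad_dec = (z.T @ err) / n
        grad_enc = (faces.T @ (err @ dec.T)) / n
        dec -= lr * grad_dec                # in-place gradient steps
        enc -= lr * grad_enc

err_after = recon_err(faces_a, dec_a)

# The "swap": encode a face of identity A, decode with B's decoder,
# producing an output rendered in B's appearance.
swapped = (faces_a[:1] @ enc) @ dec_b
```

This also makes the article's observation concrete: the more training images of each subject, the better each decoder captures that subject's appearance, and the more convincing the swapped output becomes.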


While there are numerous benign uses of AI-enabled face-swapping software, the most popular use of FakeApp currently appears to be AI-generated fake pornography or non-consensual porn, which involves superimposing the face of a celebrity onto the body of a porn actor. And it is not difficult to imagine additional malevolent uses.

What legal recourse is available for those victimized by an AI-altered video? As in other areas where the technology has progressed more rapidly than the law, existing laws provide an imperfect toolbox to address the resulting harms.

Deep Fakes: Infringing, Commercial, False or Transformative?

The traditional arsenal of intellectual property and personality rights claims is unlikely to prove effective in combatting deep fakes. Copyright infringement claims -- often relied upon by content owners to effectuate expeditious removal of unauthorized content from online platforms -- may be difficult to assert in the context of "deep fakes." Even if there is a clear violation of one of the exclusive rights in Section 106 of the Copyright Act (such as unauthorized reproduction of a copyrighted photo of the victim), the likely complainant is the victim whose face is depicted in the video, and that individual is unlikely to own the copyright in the video or to be aligned in interest with the copyright owner. Right of publicity statutes (currently enacted in 22 states) generally prohibit unauthorized commercial uses of an individual's likeness or other indicia of identity. However, with respect to deep fake "hobbyists," who aren't necessarily enjoying any commercial benefit as a result of the video, the "victim" will likely have difficulty succeeding on the merits of a publicity rights claim.

Tort law presents more promising alternatives. Videos that are verifiably false, or depict the victim as making false statements, could be actionable as defamation. To succeed on this type of claim, and depending upon the content of the fake video, the victim may need to assert that the altered video is itself the false statement of fact. Similarly, in states where an individual may sue for false light invasion of privacy, the complaining party could succeed on a claim if he or she can prove that the false statements are material, and that the false light would be offensive to a reasonable person. In the context of defamation, and in most jurisdictions where false light is available, there is a heightened standard for claims by public figures: demonstrating "actual malice" on the part of the speaker, which is either knowledge that the statement is false or a reckless disregard for the truth.

However, as the recent case of de Havilland v. FX Networks, LLC, 2018 DJDAR 2778 (Cal. App. 2nd Dist. March 26, 2018), underscores, there is significant First Amendment protection for expressive works that are transformative, even though those works may be insulting, unflattering or embarrassing to the individual. This protection could shield creators of "deep fakes" from liability for right of publicity violations and other tort claims if the creator establishes that the value of the video comes from the creativity, skill, and reputation of the creator rather than the fame of the individual whose indicia of identity are used in the video.

Is It a Crime?

Although as yet untested in the context of "deep fakes," there are existing criminal statutes that arguably proscribe this type of conduct. At least 38 states have "revenge porn" statutes that provide criminal penalties for disseminating, without the subject's consent, photos or videos of someone partially or completely naked, some of which are worded broadly enough to potentially encompass the publication or dissemination of deep fakes. For example, Virginia's statute (Va. Code Section 18.2-386.2) makes it a Class 1 misdemeanor to engage in the unauthorized dissemination of videos or still images "created by any means whatsoever" depicting a person nude or in a state of undress. The difficulty in applying these laws in the context of deep fakes is that the victim whose face is pictured in the video is not the same "person" whose body is depicted undressed in the video.

Several states also have enacted legislation criminalizing online impersonation, among them New York, California and Texas. California's "False Personation and Cheats" statute prohibits impersonating someone "through or on an Internet Web site" or by "other electronic means" (defined to include "opening an e-mail account or an account or profile on a social networking Internet Web site in another person's name") for the purpose of harming, intimidating, threatening or defrauding another person. Under this law, which also provides a private right of action, an impersonation is credible "if another person would reasonably believe, or did reasonably believe, that the defendant was or is the person who was impersonated." (Cal. Pen. Code Section 528.5.) If found guilty, the defendant could face monetary fines or jail time.

Although some have voiced concerns as to whether these types of statutes leave enough space for parody, political satire and other protected forms of expression, these laws do signal that states are getting serious about protecting individuals' identities online.

Expanding Notions of Privacy

As daily life becomes virtually inseparable from electronic media, definitions of personally identifiable and other sensitive information are expanding to keep pace. Several states have already begun legislating privacy rights for biometric data, and this trend will likely result in the availability of additional legal claims arising out of the misuse of face prints, voice prints and other biometric data. The Illinois Biometric Information Privacy Act, which provides a private right of action, prohibits a "private entity" (essentially, nongovernmental and nonjudicial actors) from capturing, collecting, selling, trading, disclosing or disseminating a person's biometric identifier ("a retina or iris scan, fingerprint, voiceprint, or scan of hand or face geometry") without that person's informed, written consent. Although the technology underlying FakeApp and other similar programs may not fit squarely within this statute's list of proscribed activities, this type of legislation is a notable development, as it recognizes the importance and uniqueness of biometric data and the lack of traditional remedies to address the potential harms and wrongs that could result from its misuse.

More Questions than Answers

The important thing is not to stop questioning. Curiosity has its own reason for existing.

-- Albert Einstein

We have entered a time when documented photographic or video evidence is not unassailable proof; it is not uniformly authentic, nor does it necessarily convey fact or truth. Courts have already begun addressing the unique challenges presented in connection with the authentication of digital evidence in judicial proceedings. See, e.g., "Authenticating Digital Evidence," 69 Baylor L. Rev. 1 (2017). Lawyers should also consider a new approach to substantiating videographic evidence, conducting additional due diligence, questioning the provenance and authenticity of such evidence, and, as they become available, investing in technological tools that can detect altered videos (and forgeries in other types of new media). Efforts are already underway to develop these types of countermeasures; as just one example, Gfycat is reportedly using its own AI-enabled tools to expose and remove deep fakes from its online platform.

But, in the meantime, if it looks too good to be true, consider that it probably is. Be curious. Start wondering.


