Technology
Jul. 24, 2024
I heard your confession, but was it really you?
The Federal Rules of Evidence are falling behind in the race to stop voice clones and deepfakes in the courtroom. By Anita Taff-Rice and Susan J. Rose
Recorded confessions have mesmerized jurors and served as the bedrock for convictions over the years. Most recently, federal prosecutors used an audiobook excerpt in which Hunter Biden was heard "in his own words" acknowledging drug use, apparently during the period when he applied for a gun permit. Hearing Biden's own voice helped seal his criminal conviction.
But in the new age of voice clones and deepfakes, how can a judge or jury be sure that a recording is real, or that a portion hasn't been altered? Generative AI tools can "clone" voices, producing incredibly realistic recordings. A few minutes of a real voice is fed into an AI platform, which uses advanced algorithms to analyze the voice and precisely replicate a person's pitch, intonation and vocal quality.
Voice cloning has been used to distribute hateful messages mimicking celebrities and politicians, and to fool bank voice authentication systems. Voice clones may not have made their way into the courtroom yet, but it is only a matter of time.
Voice cloning seems more sinister than AI-generated photos or videos, for example, because even the best AI-generated humans still look, well, computer generated. Digital photos include metadata that allows everyday users or attorneys to determine whether the photo may have been manipulated. There is no similar metadata for cloned voices. So, what can the judicial system do to stop fake voice evidence in courtrooms?
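The metadata check described above can be sketched in a few lines. The EXIF field names below ("Software," "DateTime," "DateTimeOriginal") are real tags found in digital photos, but the sample values and the specific red-flag heuristics are hypothetical illustrations, not a forensic standard:

```python
# Sketch of the kind of metadata review the article describes for photos.
# EXIF tag names are real; the sample values and heuristics are hypothetical.

def flag_possible_manipulation(exif: dict) -> list[str]:
    """Return reasons a photo's EXIF metadata suggests editing."""
    flags = []
    software = exif.get("Software", "")
    if any(tool in software for tool in ("Photoshop", "GIMP", "Generative")):
        flags.append(f"edited with: {software}")
    # A modification timestamp that differs from the capture timestamp
    # indicates the file was re-saved after the photo was taken.
    if exif.get("DateTime") and exif.get("DateTimeOriginal"):
        if exif["DateTime"] != exif["DateTimeOriginal"]:
            flags.append("modification time differs from capture time")
    return flags

sample = {
    "Software": "Adobe Photoshop 25.0",
    "DateTimeOriginal": "2024:06:01 10:15:00",
    "DateTime": "2024:06:03 18:42:11",
}
print(flag_possible_manipulation(sample))  # prints the two red flags found
```

No equivalent fields exist in a cloned voice recording, which is the gap the article identifies.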
The Advisory Committee on Evidence Rules met earlier this year to discuss the regulation of AI-generated evidence:
"A deepfake is an inauthentic audiovisual presentation prepared by software programs using artificial intelligence. Of course, photos and videos have always been subject to forgery, but developments in AI make deepfakes much more difficult to detect. Software for creating deepfakes is already freely available online and fairly easy for anyone to use. As the software's usability and the videos' apparent genuineness keep improving over time, it will become harder for computer systems, much less lay jurors, to tell real from fake."
The Federal Rules of Evidence contain safeguards meant to prevent the introduction of false or fraudulent evidence, requiring foundations to be laid and chains of custody to be established. But if voice clones lack a reliable way to identify them as fake, the FRE seem insufficient to challenge introduction of a clone or deepfake that appears or sounds on its face to have the same, exact characteristics as the real thing.
The committee considered a change to FRE Rule 901(b)(9) to add a new section, (B), that would require a proponent offering AI-generated evidence also to describe the software or program used and to show that it "produced reliable results" in the face of a challenge to the evidence.
Alternatively, the committee considered the addition of a new section (C): Potentially Fabricated or Altered Electronic Evidence. "If a party challenging the authenticity of computer-generated or other electronic evidence demonstrates to the court that it is more likely than not either fabricated, or altered in whole or in part, the evidence is admissible only if the proponent demonstrates that its probative value outweighs its prejudicial effect on the party challenging the evidence."
Astonishingly, under this proposal, if a fake vocal recording were found to be more probative than prejudicial, it would be admissible -- even though it is fake. Fortunately, the committee realized that the proposal was unworkable, so the rule change did not advance.
Congress is also taking a shot at stopping deepfakes, an effort that could provide a useful guide for new evidentiary rules for courts. The bipartisan Protecting Consumers from Deceptive AI Act is an effort to label content and weed out deepfakes and audio/visual clones. H.R. 7766.
The bill would require the National Institute of Standards and Technology to establish task forces on the development of standards to identify content created by generative AI, and to ensure that any such audio or video content is acknowledged via disclosure as having been created by generative AI. Developers of content created with generative AI would need to include metadata indicating as such in their works, traceable in much the same way a digital photograph's data provides information related to origin and modification.
The same type of AI audit trail and digital stamp should be incorporated into the FRE. Once NIST creates its standards for disclosure, both foundational requirements and chain of custody would have to include those standards in order for evidence to be deemed authentic and reliable. Adding to the usual chain of custody requirements, the FRE should include standards and processes for parties to establish authenticity for any image or voice recording purported to be a product of AI, and, if needed, require a detailed digital footprint of each step in creation and production. In addition, local or state court rules and jury instructions should incorporate standards for identifying and rejecting deepfakes.
Last year, the U.S. Senate considered a bill that would have required the inclusion of a digital watermark on all AI-generated content, but it didn't move past committee. S.B. 2765. Expert opinion is mixed on whether watermarks can successfully combat deepfakes, but the more sophisticated approaches, such as embedded metadata and cryptographic signatures, are worth exploring.
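The cryptographic-signature idea can be illustrated with a minimal sketch: the generator embeds a signed hash of the audio so that any later alteration is detectable. The key and workflow here are hypothetical, and a shared-secret HMAC stands in for simplicity; real provenance schemes (such as C2PA's content credentials) use public-key signatures so anyone can verify without holding the signing key:

```python
import hashlib
import hmac

# Hypothetical signing key held by the content generator; real schemes
# would use a private key with a publicly verifiable certificate.
SECRET = b"generator-signing-key"

def stamp(audio: bytes) -> str:
    """Produce a provenance tag: a keyed signature over the audio's hash."""
    digest = hashlib.sha256(audio).digest()
    return hmac.new(SECRET, digest, hashlib.sha256).hexdigest()

def verify(audio: bytes, tag: str) -> bool:
    """Check that the audio still matches its provenance tag."""
    return hmac.compare_digest(stamp(audio), tag)

clip = b"\x00\x01\x02\x03"  # stand-in for raw audio bytes
tag = stamp(clip)
print(verify(clip, tag))            # prints True: unmodified clip verifies
print(verify(clip + b"\xff", tag))  # prints False: any alteration breaks it
```

A chain-of-custody rule built on such stamps would let a proponent show, step by step, that a recording matches the tag created at the moment of capture or generation.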
Before courts can successfully identify and bar AI frauds or deepfakes, standards must first be in place. So, for now, courts may be stuck with the current authentication/foundation/chain-of-custody requirements in the FRE, but the Advisory Committee on Evidence Rules should actively participate in the congressional process to develop standards and move quickly to incorporate them.
Anita Taff-Rice is the founder of iCommLaw®, a Bay Area firm specializing in technology, telecommunications and cybersecurity matters. She can be reached at anita@icommlaw.com. Susan J. Rose is a Bay Area attorney focusing on generative AI issues as applied to the legal field. She may be reached at susanjrose@gmail.com.
For reprint rights or to order a copy of your photo, email Jeremy_Ellis@dailyjournal.com for prices (direct dial: 213-229-5424).
Send a letter to the editor: letters@dailyjournal.com