
Law Practice

Oct. 4, 2019

Will the artificial intelligence please take the stand

As fully autonomous vehicles stuffed with artificial intelligence ("AI") prepare to take to our roadways and commandeer the lives and imaginations of Californians, it will soon be time for trial lawyers to defend these wondrous inventions from the expected onslaught of those who will test their competency.

Paul F. Rafferty

Paul is recognized as one of the top 20 AI lawyers in California.


As fully autonomous vehicles stuffed with artificial intelligence prepare to take to our roadways and commandeer the lives and imaginations of Californians, it will soon be time for trial lawyers to defend these wondrous inventions from the expected onslaught of those who will test their competency. Conventionally, a human operates an automobile. In the event of an accident, the human is put on trial for negligence. At trial, the human driver explains the basis -- assuming there is one -- for the driver's actions which, by way of example, led to the untimely death of a pedestrian standing on a sidewalk. The human driver explains that he or she swerved to avoid striking three children running across the road, and struck the pedestrian instead. The human is then assailed by the decedent's lawyer for inattentiveness, excess speed, texting, changing the station on the radio, ineptness, contact lens wear, sun glare, and so on. Chances are, the driver will be found negligent under any one of these theories regardless of the prudent decisions made on that fateful day.

Finding AI negligent under the same circumstances will be more difficult. The AI will present a potent defense, backed by statistics, 100% attentiveness, and a logical explanation for why killing that pedestrian was prudent. Would a trier of fact reject the AI's reasoning? Not likely. In a 2016 study published in Science, authors Bonnefon, Shariff and Rahwan found that 76% of the human participants thought it more moral for AI to sacrifice one human rather than kill many. Those are pretty good odds for a defense verdict if the jury is similarly composed. But how will the trial lawyer put an AI witness on the stand when the AI isn't a person? You see, AI has no "personal knowledge" upon which to testify, as that term is traditionally defined. AI is an "it," and our Evidence Code does not like that.

Trial lawyers should already be working with their autonomous vehicle clients to create a robust internal framework documenting the development and rationale behind the AI used to operate their autonomous vehicles so that, when the time comes, autonomous vehicle manufacturers can defend themselves using human witness testimony. But that is rather boring in the new age of autonomous vehicles. Why not let AI testify? Picture this witness at trial: a three-dimensional AI hologram that testifies on its own behalf, in non-leading fashion, explains the decisions prudently made on the fateful day it flattened the pedestrian, and then fends off cross-examination from a frustrated opposing counsel who simply cannot out-think it. Call the AI something cute, like "Norman," defeat the expected Evidence Code Section 352 objection(s) for making the hologram too beautiful or engaging, pull the right jury, 76% of which will trust Norman's judgment, and win the case. Sounds good, but can current law and procedure recognize this form of evidence?

California Evidence Code Section 702 states that "the testimony of a witness concerning a particular matter is inadmissible unless he has personal knowledge of the matter." "He" would seem to rule out "it," but then again, the Evidence Code states that the masculine gender "includes the feminine and neuter." "Personal knowledge" means knowledge of a circumstance or fact gained through firsthand observation or experience. Section 702(b) permits a witness's personal knowledge of a matter to be shown by any otherwise admissible evidence, including the witness's own testimony. But must "personal" knowledge belong to a person? Perhaps not. Section 175 also defines a "person" to include a firm, organization, or corporation (these are "its"), and Section 225 recognizes a "statement," whether verbal or nonverbal, from a "person." AI may soon have the capability to testify in court with documented, personal knowledge of a circumstance or fact gained through firsthand observation and experience. Why can't AI testify in court in furtherance of Section 702(b)?

The difficulties in permitting Norman to testify do not end there. There are also those who argue that AI must be able to think for itself in order to testify, and that AI merely responds to data input. However, there are mounting arguments that even this may not be true -- and if competent evidence is provided, does it even matter? For example, many exceptions to California's hearsay rule allow admission of statements of others observed by a testifying witness. The witness isn't engaging in thought by offering this otherwise inadmissible evidence. Why require AI to think about what it has observed as a basis for excluding probative evidence? Isn't the greater issue whether that evidence is reliably produced? And who says AI cannot think? In the widely reported match between Google's AlphaGo and 18-time Go world champion Lee Sedol, the AI played a move not programmed by its creators. To many, that sounds eerily like thought.

Perhaps humans are simply too afraid to accept that AI can (or soon will) exhibit human-like thought processes based on the highly complex interaction of its algorithms. Fear has long fostered discrimination -- are humans soon to discriminate against AI? In Part 2 of this essay, we'll explore what others have said or are doing in furtherance of AI testimony, introduce the future of "machinal knowledge" (as opposed to personal knowledge), and then discuss the realities and benefits of this possible evidentiary evolution in our courts.



