
Law Practice

Oct. 24, 2019

Will artificial intelligence please take the stand (part 2)

This final chapter looks at what others have said or are doing in furtherance of AI testimony, introduces the concept of "machinal knowledge" in relation to court testimony (as opposed to personal knowledge), and then discusses the realities of this next evidentiary chapter in our courts.

Paul F. Rafferty

Paul is recognized as one of the top 20 AI lawyers in California.

In part one of this essay, the feasibility of admitting artificial intelligence testimony in court was questioned, exploiting vague definitional terms within our evidence code that neither recognize nor seem to prohibit this form of evidence. This final chapter looks at what others have said or are doing in furtherance of AI testimony, introduces the concept of "machinal knowledge" in relation to court testimony (as opposed to personal knowledge), and then discusses the realities of this next evidentiary chapter in our courts.

The evidence code may not be the biggest stumbling block to the admission of reliable AI testimony; the bigger obstacle may be humans' proclivity to discriminate against non-human intelligence. Decades ago, Hilary Putnam of the Massachusetts Institute of Technology identified this human discriminatory tendency toward AI. He concluded that discriminating among intelligences based simply on the "softness or hardness of the body parts," or on whether that intelligence emanates from a human or a synthetic organism, "is as silly as discriminatory treatment of humans on the basis of skin color." Putnam may have been ahead of his time in calling humans out for their AI insecurity. What is it about AI testimony in court that is so unsettling? Is it that AI testimony would be so persuasive as to create undue prejudice? Possibly. In his article "The Opinion of Machines," Curtis E.A. Karnow warned of a "high risk" that juries will view computer systems with even greater authority than human experts based on certain attributes possessed by AI. Or is it that AI data (and hence testimony) could be manipulated unfairly by humans, making it unreliable (as though human witnesses don't manipulate their own testimony)? Or is it just plain old discrimination that will keep AI from testifying in court?

The answers to these questions aren't clear, but times are changing for AI. Recently, the U.K. Parliament invited its first non-human witness, a robot named Pepper, to answer questions on the impact of AI on the labor market. Pepper's testimony was immediately written off by many as a stunt, and critics attacked poor Pepper for lacking "intelligence" and for being unable to formulate ideas by itself. But neither intelligence nor the formulation of ideas is necessary for the delivery of competent AI testimony. Parliament's embrace of Pepper is nonetheless symbolic of the changes to come as AI makes its presence known. Closer to home, in California, Google, a leader in fascinating developments pertaining to AI, has created "Duplex," which produces AI oral communications that are virtually indistinguishable from those of humans. Perhaps coincidentally, California passed legislation, effective July 1, 2019, deeming it unlawful for AI bots to communicate with a human unless they first disclose that they are not human.

Ultimately, AI's goal is to fit in amongst us by replicating our own thought processes. The humans who define our world seem to agree: The Oxford Dictionary defines "artificial intelligence" as a computer system able to perform tasks normally requiring human intelligence; Webster defines AI as the ability to imitate human behavior; and Britannica holds that AI is a robot that "can perform tasks commonly associated with intelligent beings." Based on these definitions, if human intelligence is the ability to analyze data collected by the senses in order to reach conclusions based on prior experience, then there is no meaningful difference when AI does essentially the same thing. As California lawyer Robert A. Freitas once posed in his 1985 paper entitled "The Legal Rights of Robots," "[i]f we give rights to intelligent machines, either robots or computers, we'll also have to hold them responsible for their own errors." Freitas' notions have come full circle: If plaintiffs will attempt to hold AI responsible for its own alleged errors, why not give AI rights, such as the right to defend itself in court?

Someday, California's evidence code may admit testimonial evidence based on both personal knowledge and "machinal knowledge." AI testimony using machinal knowledge can be tested beforehand during discovery and with a battery of hostile motions in limine, and lawyers can fight over the admission of that testimony just as they do now. But will judges buy it? California Superior Court Judge Alexander Williams III (Ret.) had this to say: "Treating AI as a 'person' for purposes of testimony really should not be a problem. It is akin to any other manner of presenting evidence: one qualifies a witness via a process that can readily qualify the AI entity to give evidence. To the extent that the AI evidence source can literally speak for itself, the more the better. In short, I see no problem with qualifying AI as a source of evidence. I think we have to be careful, however, to avoid suggesting that an AI entity is more than that. Such a step would move the consideration quickly from the pragmatic to the spooky!" Judge Williams, then, can embrace the concept of AI testimony, but only as a source of evidence, not as a defendant itself.

Over time, objections based on prejudice will no longer suffice to discriminate against machinal knowledge, and the testimony will come in. The trier of fact will ultimately determine the weight of such testimony, and life (synthetic or otherwise) will go on. It thus falls to today's AI lawyers to start thinking now about this next battlefield of evidentiary persuasion, not just in the courtroom, but with their clients as well. Unless clients develop capable AI speech that can accurately express AI's spectrum of "personal" reference, AI testimony will likely be viewed as manifestly undependable in its interpretation and simply too risky for admission. But assuming industry can develop this new form of communication, California can lead in compelling change to the rules of evidence used throughout the country. Acceptance of AI in the courtroom would simply be another milestone for the state. After all, California already boasts one of the best regulatory frameworks in the country for the testing and deployment of fully autonomous vehicles (AVs). It naturally follows that California will be first to allow the AI drivers of fully autonomous vehicles to testify in court.

If AI testimony is admitted in court to help decide liability in automobile accidents, many other interesting developments may soon follow. For example, where AI testifies in court, will opposing AI be allowed to cross-examine? If AI can testify, will programmers no longer be able to testify personally about what AI data tells them, for fear that their testimony will be excluded as hearsay? Will AI be noticed for deposition? Will a new type of expert witness emerge to exclude AI testimony based on incompetency or "old school" product defect? Will this expert even be human? More interestingly, in accidents involving only AVs, will software ultimately exist to decide liability based solely on the testimony (or data) supplied by the AI operating those vehicles? Could superior courts mandate that AV-versus-AV accident liability be decided using software, dramatically simplifying trials and helping to relieve our superior courts of their tremendous volume of pending civil litigation?

What seems like silliness today is often reality tomorrow. In the original Star Trek series, the actors communicated using devices that looked very much like cellphones, and people scoffed at how far-fetched that was. Today, the communicators our children carry around in their back pockets are said to possess as much computing power as the systems that put our astronauts on the moon! AI testifying in court? We'll see.


