This is the property of the Daily Journal Corporation and fully protected by copyright. It is made available only to Daily Journal subscribers for personal or collaborative purposes and may not be distributed, reproduced, modified, stored or transferred without written permission.

Mar. 1, 2023

CHATGPT & ADVANCED ARTIFICIAL INTELLIGENCE: A POWERFUL TOOL OR ETHICAL MINEFIELD?

Shaun A. Hoting

Keller/Anderle LLP

ChatGPT, a natural language processing "artificial intelligence," has rocketed to fame in just a few short months. As of early February 2023, it was reportedly the most downloaded application on the Apple and Google app stores, with a user base of over 100 million active users. ChatGPT and its advanced-AI relatives differ from existing AI in that they can create entire original works based on selected source material and a natural language prompt from the user.

ChatGPT has proven itself remarkably capable. It has already passed the same law school and business school tests given to students. It has written full essays on complex topics, and can write programming code based on user requests. Widespread reports of students using ChatGPT to prepare essays have some academic administrators trying to fight fire with fire, using AI to detect essays written by other AIs. Given its meteoric rise in popularity across disparate fields, ChatGPT is the best known of these natural language AIs; but other advanced AIs are in the works, and some are already being marketed to litigation attorneys.

The potential benefits to litigators - particularly overburdened ones with looming deadlines - are immediately apparent. AIs like ChatGPT represent a powerful, easy way to prepare everything from case summaries, to discovery responses, to briefs. Facing an impending deadline and a veritable mountain of discovery to respond to, a litigator may decide to use advanced AI to prepare discovery objections and responses. Similarly, given next-gen AI's ability to write summaries and analyses of issues, a litigator may try using one to prepare a case summary or even a section of a brief. But just because litigators can use AI for these tasks, should they? The answer is decidedly mixed.

In the most extreme example, one AI provider sought to have its AI argue a case in traffic court. The provider quickly found itself in legal jeopardy after receiving letters from multiple bar associations, with some threatening referral to the local district attorney's office. On the other extreme, innumerable courts have approved of AI-powered document review (i.e., "predictive coding"), provided the attorney is involved in identifying the "seed documents" that the AI uses to code other documents. Using ChatGPT and its ilk for day-to-day litigation activities like brief writing and discovery responses does not fall neatly into either of these categories, however, and presents its own professional responsibility implications.

Advanced AIs implicate the very first Rule of Professional Conduct, Rule 1.1, the duty of competence, which requires attorneys "to keep abreast of... the benefits and risks associated with relevant technology." Rule 1.1, Comment 1. In other words, if an attorney is considering whether to use an advanced AI, knowing what advanced AIs can do - and what they cannot - is not only good practice, it is an ethical requirement as well.

Attorneys could easily run afoul of their ethical obligations by relying on advanced AI to identify and prepare case summaries or analyses that are then used in briefs filed with the court. The attorney's duty of candor to courts, found in the Rules of Professional Conduct, the Business & Professions Code, and Federal Rule of Civil Procedure 11, requires that claims and legal contentions asserted in a filing be warranted by existing law (or a nonfrivolous argument for extending the law). An attorney who trusts an advanced AI to identify and prepare case summaries needs to be aware that the cases may be inapposite, and the summaries incomplete or just wrong. Indeed, some of ChatGPT's answers in other contexts have been described as confident, but wrong. While advanced AI might be able to prepare case summaries and analyses that appear convincing at first blush, that does not relieve the attorney of the ethical obligation to read the cases fully and check the brief carefully to ensure there are no misrepresentations of precedent.

In a similar vein, litigators are not permitted to obstruct their opponents' access to evidence or use means designed to delay the proceedings. Accordingly, attorneys using advanced AI to prepare discovery responses need to ensure that any AI-generated objections are appropriate, tailored to the specific discovery request, and not rote boilerplate. While advanced AI may give a litigator a head start on preparing discovery responses, the obligation still lies with the litigator to ensure that any objections are asserted in good faith; though untested, it is unlikely a court would conclude an attorney was acting in "good faith" if it were revealed that the AI prepared objections the attorney did not review.

Most importantly, advanced AIs implicate Rule 1.6, the attorney's duty of confidentiality. ChatGPT, for example, used extensive source material to generate its "database," and there have been reports in the programming context of ChatGPT's output regurgitating those source materials verbatim. And just recently a student was able to trick ChatGPT into revealing information it was not supposed to disclose. If an attorney elects to use an advanced AI for legal research, preparing discovery responses, or even brief writing, the advanced AI could be silently incorporating the information supplied by the attorney into its database. Depending on how the attorney structures the legal research or discovery responses when using the advanced AI, the attorney could be revealing confidential client information, unknowingly breaching the client's confidences. More concerning, the advanced AI could deliberately or inadvertently reveal that confidential information to others who later use the AI.

Given these ethical obligations, should attorneys use advanced AIs in their practice? Advanced AIs are like the macros used for formatting a brief or Westlaw's Quick Check (or Lexis's BriefCheck) system. They are a tool - a very powerful tool - to be used carefully with an understanding of what they can and cannot do. But advanced AIs do not supplant the litigator's judgment, skill, or ethical obligations.

Shaun A. Hoting is a partner at Keller/Anderle LLP.
