
self-study / Technology

Jul. 18, 2024

What California businesses need to know about the evolving AI legal framework

John Brockland

Partner, Hogan Lovells

Vassi Iliadis

Hogan Lovells US LLP


Roshni Patel

Counsel, Hogan Lovells

The past two years have brought significant advances in the field of artificial intelligence (AI). As AI tools become more prevalent and AI capabilities are built into everyday technologies such as word processing software and search engines, regulators and legislators at both the federal and state level are scrambling to develop rules governing the use and development of AI tools. The result is a confusing patchwork of federal and state rules, enforcement precedent, and regulatory guidance that companies must somehow turn into an effective AI compliance program. In the absence of clear rules and requirements, many companies may be tempted to hold off on developing an AI compliance program.

However, it is becoming increasingly risky for companies to turn a blind eye to how AI tools are being developed and used by their employees. Fortunately, there are steps that companies can take now to lay the groundwork for an AI compliance program that can be more easily adapted to an evolving legal framework.

Legal framework

At the federal level, there is no generally applicable legislation governing the development or use of AI. There are regulations that govern the use of AI in certain sectors, and federal regulators have made clear that existing legislation, such as the Equal Employment Opportunity Act and the Fair Credit Reporting Act, still applies to companies using AI tools for activities covered under those laws. Federal regulators have also shown through enforcement that they intend to oversee the use of AI. The Federal Trade Commission (FTC), for example, has taken several enforcement actions against companies that the FTC alleged were engaging in unfair or deceptive acts or practices related to their development of AI technologies.

State legislators have been quicker to get laws on the books. On March 13, 2024, Utah passed the first law requiring companies to inform consumers when they are interacting with a generative AI tool. On May 17, 2024, Colorado passed the first comprehensive AI legislation governing the development and deployment of high-risk AI systems. However, the legal framework is still far from settled. Less than a month after the Colorado AI Act was passed, the Colorado Governor, Attorney General, and a state senator authored a letter confirming that they intend to support amendments to the law because it needs "additional clarity" and "improvements." Moreover, more than 40 states introduced AI bills during the 2024 legislative session, meaning we are likely to end up with a patchwork of state laws governing AI.

Building an AI compliance program

Many companies are unsure where to start when it comes to AI compliance. Others have begun to implement AI compliance measures but are hesitant to dedicate the time and resources needed to develop a full-blown compliance program, given the evolving legal framework in the U.S. Even in the face of an uncertain legal landscape, however, companies can take the steps below to lay the foundation for an AI compliance program that can be adapted more easily to evolving requirements:

· Determine your company's AI philosophy: Few companies can effectively implement an outright ban on the use of AI tools and, at the other end of the spectrum, relatively few companies develop AI tools as part of their primary business function. For companies that fall between these two extremes, it is important to articulate the company's general approach to AI. Is the company willing to allow only AI tools from reputable third-party providers? Is the company open to adopting any and all AI tools that may increase efficiency? Or is the company somewhere in between, seeking to evaluate each AI tool on a case-by-case basis? Are there any areas in which the company will absolutely not employ an AI tool? Determining your company's AI philosophy helps set the tone and boundaries for an AI compliance program.

· AI governance: Every compliance program needs one or more individuals responsible for governance. Depending on the company's use of AI tools and its AI philosophy, AI compliance measures could fold into an existing compliance program and governance structure or require designating new individuals to oversee the company's use of AI.

· Risk assessments: Many states have enacted comprehensive privacy laws that require companies to conduct a risk assessment before engaging in automated decision-making that may have a legal effect on consumers. The Maryland Online Data Privacy Act goes further, requiring companies to conduct a risk assessment for each algorithm that is used. It seems inevitable that future laws governing AI will require companies to assess the risks associated with AI tools, particularly tools used to automate decision-making in interactions with consumers. Companies should adapt existing risk assessment processes, or implement a new one, to address the adoption of new AI tools.

· Training: There is considerable enthusiasm for adopting new AI tools, which employees often learn to use on the job. Companies should consider training employees on the "do's and don'ts" of using AI tools. For example, in most cases, employees should not input confidential information or personal information into AI tools.

· Vendor diligence and contracts: Even if a company is not using AI tools itself, its vendors are increasingly using and building their own. Companies should make sure that their existing vendor diligence process covers the risks associated with AI and should enter into contracts that appropriately restrict vendors from using the company's data to build new AI tools.

· Transparency: A recurring theme in rules governing AI is that transparency is key. Companies should be aware of how AI is being used both internally and externally, and to the extent that a company's use of AI tools may implicate consumer protection issues, the company should make sure required disclosures are made. For example, a company may seek to leverage large stores of customer data to build a model that improves a business function. To the extent that customers' personal data is used to build the model, the company should inform customers of that use through its privacy policy and other privacy notices. Companies should also verify that any use of AI tools aligns with prior public disclosures. For example, if a company has stated that it will never use AI in recruiting but later decides to use an AI tool to sort job applications, the company should be transparent in its public statements about that change.

AI is not going away, and now is the time to start incorporating AI into your company's compliance program. To help with this endeavor, California companies should seek counsel from lawyers with a strong understanding of AI technologies and current and emerging AI regulations.

