This is the property of the Daily Journal Corporation and fully protected by copyright. It is made available only to Daily Journal subscribers for personal or collaborative purposes and may not be distributed, reproduced, modified, stored or transferred without written permission.

Labor/Employment,
Technology

Aug. 7, 2024

Emerging legal trends in workplace AI use


By Warren F. Hodges


Hanson Bridgett LLP

Phone: (916) 491-3035

Email: Whodges@hansonbridgett.com



Artificial intelligence (AI) is transforming the workplace, and state and federal lawmakers alike are moving to regulate how employers use it.


FEDERAL GUIDANCE ON AI USE


In 2023, the Equal Employment Opportunity Commission (EEOC) released a technical assistance document titled "Assessing Adverse Impact in Software Algorithms and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964." This document provides a question-and-answer format to help employers understand the risks of disparate impacts when using AI for employment decisions. 


The EEOC has also made clear that it will bring actions against employers whose AI-based software discriminates against job applicants based on protected characteristics, such as age and disability. In 2023 the agency settled with an employer accused of using an AI tool that rejected older applicants.


The Department of Labor's Wage and Hour Division (WHD) has also issued guidance on using technology to monitor work time and ensure accurate pay, and has cautioned employers about relying on AI to make leave determinations under the Family and Medical Leave Act (FMLA). The rise of remote work has led to increased electronic monitoring, posing new privacy challenges. While basic forms of monitoring, such as tracking logins, are generally unproblematic, more advanced technologies such as keystroke tracking may be appropriate only for workers who handle sensitive information.


In April 2024, nine federal agencies issued a "Joint Statement on Enforcement of Civil Rights, Fair Competition, Consumer Protection, and Equal Opportunity Laws in Automated Systems." The agencies warned that they intend to "monitor the development and use of automated systems and promote responsible innovation," and to "use our collective authorities to protect individuals' rights regardless of whether legal violations occur through traditional means or advanced technologies."


LEGISLATIVE EFFORTS: NO ROBOT BOSSES AND STOP SPYING BOSSES


Two significant federal proposals, the No Robot Bosses Act and the Stop Spying Bosses Act, are under consideration in Congress. These bills seek to regulate AI and surveillance in the workplace. The No Robot Bosses Act would apply to employers with 11 or more employees and cover decisions related to hiring, firing, and other employment conditions. It includes whistleblower protections and covers both independent contractors and employees. The Stop Spying Bosses Act shares similar definitions and goals, emphasizing the need for timely and conspicuous disclosure of surveillance activities and prohibiting certain types of data collection, particularly collection that interferes with union activities.


STATE-LEVEL LEGISLATION: CALIFORNIA'S AB 2930 AND SB 1047


California is leading the way with comprehensive legislation on AI use. AB 2930 is a broad bill imposing vast and onerous compliance requirements upon companies that apply AI tools in making "consequential decisions." If passed, the bill would define a "consequential decision" to include activities related to health care or health insurance, employment considerations, housing determinations, and accreditation processes, to name just a few examples. Companies deploying AI tools for covered purposes would be required to know what data is collected and how it is used, describe the safeguards implemented to protect against "foreseeable risks," assess potential adverse impacts on protected characteristics such as age or sex, and a host of other requirements designed to mitigate the adverse impacts caused by AI tools.


Similarly, SB 1047, titled "The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act," proposes regulating AI systems based on their size and computing power, with stringent thresholds. The bill creates sprawling regulations to be enforced by a new state agency. If the bill passes, developers of AI models will be required to submit compliance and safety incident reports. Developers of large AI models would be required to make a "positive safety determination" before training non-derivative models, ensuring the models do not have the capability to enable harms such as mass casualty events and cyberattacks, and to implement cybersecurity protections and a full shutdown capability. The bill also provides whistleblower protections for employees who report violations of the statute.


Finally, the California Privacy Protection Agency, created by the California Privacy Rights Act, has proposed regulations that would impose privacy requirements on businesses covered by the California Consumer Privacy Act (companies with more than $25 million in annual revenue or that process the personal data of more than 100,000 Californians) using automated decision-making technology (ADMT). The agency defines ADMT as "anything that uses computation as a whole or part of a system" to "facilitate human decision making." The agency's proposed rules would regulate the use of ADMT in decisions "that produce[] legal or similarly significant effects concerning a consumer," including in "employment or independent contracting opportunities or compensation." As under SB 1047, covered businesses using ADMT would be required to conduct and submit "risk assessments" of their tools. Businesses would also be required to implement notification and opt-out processes for consumers whose data is collected, and to respond to consumer requests for information about the company's use of ADMT.


CONCLUSION


As AI continues to evolve, so will the legal frameworks governing its use. The current landscape suggests a strong focus on transparency, user control, and anti-discrimination. Employers must stay informed and compliant with these emerging regulations to harness AI's potential while safeguarding employee rights and maintaining legal integrity.


Warren F. Hodges is counsel at Hanson Bridgett LLP.
