
Aug. 2, 2023

The use of artificial intelligence in the workplace


By Anne E. Villanueva

Many companies are looking to harness artificial intelligence (AI) to improve productivity and efficiency as AI’s capabilities continue to expand. In doing so, employers should be mindful of the evolving legal landscape surrounding the use of AI, including existing employment-related laws.

Legal risks under existing law

AI can be used for many workplace functions, such as preparing job descriptions and analyzing data to help predict which applicants would be most successful at the company. AI can also be used to analyze productivity, measure individual performance, and select candidates for promotion. Employees are also using chatbots to outsource time-consuming tasks, such as drafting emails, writing code or performing research. Chatbots draw on troves of publicly available information.

To the extent an employer uses AI in connection with recruitment, hiring, and promotion, there is a risk that the employer may run afoul of applicable federal and state anti-discrimination laws. For example, screening applicants and basing hiring decisions on criteria such as ratings, pay, and titles may skew in favor of white or male applicants. In addition, AI might make legally impermissible inferences based on an applicant’s religion, age, sexuality, genetic information, or disability status learned from either the internet, social media, or the applicant’s resume or interview.

There are also data privacy concerns associated with use of AI. If confidential or proprietary information is entered into a company’s AI system or is submitted in a query to a chatbot, that information might be incorporated into the chatbot’s database and inadvertently disclosed in a subsequent response to an unrelated party’s query.

In the event that use of AI results in employee layoffs, employers must comply with anti-discrimination laws when selecting those who will be separated from employment, as well as the requirements of the federal Worker Adjustment and Retraining Notification (WARN) Act and equivalent state and local laws, which govern notice obligations in connection with plant closings and mass layoffs.

To the extent employers use AI in robotic systems or other machinery, they must comply with the Occupational Safety and Health Act and equivalent state laws and regulations, which require that employers provide a workplace free from recognized hazards.

New AI legislation and guidance

Federal, state, and local governments have considered legislation to supplement existing employment laws and taken other steps in response to the rapid increase in the use of AI in connection with employment.

Although no comprehensive federal legislation has passed, there are several bills in Congress addressing different aspects of AI. For example, the proposed Algorithmic Accountability Act of 2022 would direct the Federal Trade Commission (FTC) to require entities to conduct impact assessments for bias, effectiveness, and other factors when using automated decision systems to make critical decisions.

The White House released a Blueprint for an AI Bill of Rights, the National Institute of Standards and Technology released an AI Risk Management Framework, and President Joe Biden signed an executive order related to bias and discrimination with respect to the use of AI.

On May 12, 2022, the Equal Employment Opportunity Commission (EEOC) issued technical guidance, noting that using AI to evaluate video interviews without providing notice and the opportunity for applicants to request a reasonable accommodation can violate the Americans with Disabilities Act.

On May 18, 2023, the EEOC issued technical guidance explaining that if an AI tool causes individuals with a protected characteristic to be selected at a substantially lower rate than individuals of another group, the employer’s use of that tool would violate Title VII.
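As a rough illustration of the selection-rate comparison the EEOC guidance describes, the sketch below computes selection rates for two hypothetical applicant groups and compares them using the longstanding "four-fifths" rule of thumb. The group labels and numbers are invented for illustration only; this is not a substitute for a formal bias audit or legal analysis.

```python
# Illustrative sketch (not legal advice): comparing selection rates between
# two applicant groups. The group names and counts are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Share of applicants in a group who were selected."""
    return selected / applicants

group_a_rate = selection_rate(selected=48, applicants=80)  # 60%
group_b_rate = selection_rate(selected=12, applicants=40)  # 30%

# Impact ratio: the lower group's rate divided by the higher group's rate.
impact_ratio = min(group_a_rate, group_b_rate) / max(group_a_rate, group_b_rate)

print(f"Group A selection rate: {group_a_rate:.0%}")
print(f"Group B selection rate: {group_b_rate:.0%}")
print(f"Impact ratio: {impact_ratio:.2f}")

# Under the four-fifths rule of thumb, an impact ratio below 0.80 is
# generally treated as evidence of a substantially different selection
# rate that warrants closer review.
if impact_ratio < 0.80:
    print("Selection rates differ substantially; further review is warranted.")
```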

At the state level, Maryland’s House Bill 1202, which went into effect on Oct. 1, 2020, prohibits employers from using a facial recognition service for the purpose of creating a facial template during an applicant’s pre-employment interview, unless the applicant consents by signing a specified waiver. Illinois enacted the Artificial Intelligence Video Interview Act, which took effect in 2020 and imposes requirements on employers that analyze video interviews with AI. Many other jurisdictions, including California, New Jersey, New York, Vermont, and Washington, D.C., have proposed or are otherwise considering legislation to regulate AI use in hiring and promotion.

At the local level, New York City Local Law 144, which went into effect on July 5, 2023, sets forth limitations and requirements for employers using automated employment decision tools (AEDTs) to screen candidates for hire or promotion. The law applies to: (1) jobs performed, at least in part, from an office in New York City; (2) fully remote positions associated with an office in New York City; and (3) employment agencies located in New York City. The law prohibits use of an AEDT unless the tool has been the subject of an independent bias audit within the past year and requires employers to post on their websites a summary of the results of the most recent bias audit; it does not, however, require any specific actions based on the audit’s results. Employers and employment agencies must provide notice of the use of AEDTs to employees and candidates for employment who reside in New York City, including instructions for requesting an alternative selection process or reasonable accommodation. Violations carry a civil penalty of up to $500 for a first violation and between $500 and $1,500 for each subsequent violation.

Best practices

Employers that use AI technologies for hiring, screening, and promotion should consider, among other things, (i) auditing the technology to ensure that selection rates do not violate anti-discrimination laws, and (ii) clearly informing applicants and employees about the type of technology that will be used and the information being measured.

Employers should also consider implementing policies and procedures governing employees’ use of AI at work. Employers should train staff regarding use of — and potential discriminatory issues associated with — AI tools and the need to verify the accuracy of any AI-generated content.

Anne E. Villanueva is a partner at Skadden, Arps, Slate, Meagher & Flom LLP.
