This is the property of the Daily Journal Corporation and fully protected by copyright. It is made available only to Daily Journal subscribers for personal or collaborative purposes and may not be distributed, reproduced, modified, stored or transferred without written permission.

Criminal,
Technology

Jul. 24, 2024

Navigating the complex terrain of AI litigation and regulation


Artificial intelligence (AI) scams are proliferating, with fraudsters using AI to trick people into sending money that they will likely never see again. The California Senate has passed a bill that would require safety standards for large AI models and protect whistleblowers who report noncompliance with the Act. By Joseph W. Cotchett Jr. and Gia Jung

Joseph W. Cotchett Jr.

Founding Partner, Cotchett, Pitre & McCarthy LLP

840 Malcolm Rd #200
Burlingame, CA 94010

Phone: (650) 697-6000

Fax: (650) 697-0577

Email: jcotchett@cpmlegal.com

UC Hastings COL; San Francisco CA

Joseph W. Cotchett is a founding partner of Cotchett, Pitre & McCarthy LLP and one of the foremost trial lawyers in the country, with over 50 years of experience litigating complex civil fraud, antitrust, securities, and mass torts cases. He is also the author of several books on Federal and California evidence.

Gia Jung

Attorney, Cotchett, Pitre & McCarthy LLP

Gia Jung is an attorney at Cotchett, Pitre & McCarthy LLP specializing in complex commercial litigation, consumer class actions, and artificial intelligence and cyber frauds. Jung has lectured on AI nationally and has handled several AI cases.

The road of artificial intelligence (AI) litigation and regulation is underdeveloped, uncertain, and evolving rapidly as legislatures and courts struggle to keep pace with innovation. Litigants face numerous and often unpredictable hurdles in obtaining relief from harm, and regulatory agencies have been placed in a precarious position by the Supreme Court's latest decision on Chevron. Billions of dollars have been taken from innocent people around the world through the fraudulent use of AI.

THE OVERALL HARM

Harm caused by AI is proliferating. Every day, Americans fall victim to scams that use artificial intelligence in one form or another to convince them to send money they will likely never see again. Many of these scams are based on "deepfakes"--AI mimicry of a person's voice and/or appearance--which need mere seconds of audio or video to create a convincing dupe. With today's AI tools, an outgoing voicemail message can be enough to replicate a voice. Scammers are using these tools in a multitude of devious ways. Deepfaked voices have been used to trick banks' voice ID security systems, to convince people that a family member had been kidnapped, arrested, or otherwise faced an emergency requiring quick money, and to create videos of celebrities or officials that appear to endorse an investment opportunity. By impersonating trusted individuals with unprecedented realism, scammers are able to manipulate victims into divulging sensitive information or authorizing fraudulent transactions.
Victims are often left with no recourse: the scammers are nearly untraceable, and the bank accounts linked to the fraudulent transactions vanish as soon as the money arrives. The FBI and FTC have repeatedly warned the public about the prevalence and efficacy of these scams. The FTC in particular advises anyone who thinks they have been scammed to immediately contact their bank or payment service and report the fraudulent transfer in the hope that the charge can be reversed. But for many, the bank simply permitted the fraudulent transfer, the charge went through, and the money is gone. For those who fall victim to cryptocurrency scams, the money is almost impossible to recover, as those transactions are typically irreversible.

THOSE MAKING SCAMS POSSIBLE

It is not only the scammers who are at fault. Many internet platforms like YouTube, X (formerly Twitter), and foreign platforms allow cryptocurrency and other AI scams to flourish on their sites unchecked: the profiles that post them are verified by YouTube, X, and other platforms as legitimate, and the videos and posts are allowed to stay up long after they are reported as fraudulent. Banks, too, allow these scams to thrive by approving clearly fraudulent transactions, including by allowing elderly customers to transfer nearly their entire savings in one or two transactions completely at odds with their prior transaction history. These intermediaries are among those that most need to take more responsibility--both by actively deterring these schemes and by providing recourse for the victims who relied on their services.

INTERNET PLATFORMS HIDE BEHIND LEGAL PROTECTIONS

Litigation hoping to provide relief for plaintiffs affected by the harmful use of AI is, at best, narrowly surviving. The Communications Decency Act (Section 230), adopted by Congress in 1996, has long been the reigning and nearly insurmountable hurdle to getting any relief against internet platforms. But narrow cracks in the otherwise impenetrable fortress of Section 230 are starting to appear. For example, in March of this year, the California Court of Appeal, Sixth Appellate District, reversed and remanded an action against YouTube that had been thrown out on demurrer under Section 230. Wozniak v. YouTube, LLC (Cal. Ct. App., Mar. 15, 2024, No. H050042) 2024 WL 1151750. In doctored videos circulated on YouTube, Steve Wozniak and other celebrities appeared to promise that viewers who sent in bitcoin would receive double back, the scam made more realistic by the videos appearing on "verified" YouTube channels. The Court of Appeal held that YouTube could be liable for the information it provided in its "verification badges," which indicated that the channel could be a trusted source. These same scams are still prolific on YouTube, with fraudsters now using deepfakes to defraud users out of their money. As this case goes forward, it may shift at least some of the responsibility for rampant AI scams to internet platforms until affirmative legislation is passed.

ELDERS AT RISK

For elders deceived by deepfakes, even in California--which is supposed to have some of the strongest financial elder abuse protections in the country--litigation seeking to hold banks or others responsible for clearly fraudulent transactions that drain the accounts of senior customers is fraught with roadblocks and barriers. Only egregious cases are being taken, and even those face slim odds of success. Banks are following the playbook they have developed over the last decade and a half: removing cases to federal court and getting them dismissed under federal case law relying on Das v. Bank of America, 186 Cal. App. 4th 727 (2010), a case which predates the 2013 California statutory amendment that was supposed to make it easier for plaintiffs to recover from banks that allowed fraudulent transfers. The Ninth Circuit in Gray v. JPMorgan Chase Bank, N.A., No. 23-55318, 2024 WL 1342619 (9th Cir. Mar. 29, 2024) recently affirmed this standard, holding that under Das, a bank "assisted" financial abuse only if it had actual knowledge of, or intentionally assisted in carrying out, the fraudulent scheme. This is, of course, a very difficult standard to meet--in Gray, because the bank asked no questions about the purpose or unusual nature of Mrs. Gray's nearly $35,000 transfer to Bangkok Bank, it could not be held liable. This standard only discourages banks from ensuring the welfare of their senior clients and allows clearly fraudulent transfers--or at least those highly likely to be the result of a scam--to proceed unchecked and unquestioned.

FEDERAL LEGISLATION

Federal legislation that would establish protections against the misuse of AI, or at least reduce the roadblocks citizens face in recovering from AI harm, is circulating in Congress but is far from being enacted. More than 30 bills introduced since 2023 are currently under discussion in Congress. These include comprehensive frameworks for legislating as well as several targeted bills that would regulate particular aspects or industries of AI, such as disclosure, the use of AI in the workplace, and deepfakes. As to Section 230, both the Blumenthal-Hawley proposed framework for substantive AI legislation announced in September 2023 and the more targeted No Section 230 Immunity for AI Act introduced in June 2023 would deny Section 230 immunity to internet platforms for damages from AI-generated content.

REGULATORY AGENCIES

Now, with the Supreme Court's decision overturning Chevron, the landscape of AI regulation is even further complicated and stymied. Indeed, at oral argument in Relentless, Inc. v. Dept. of Commerce, Justice Kagan explicitly noted, as to any forthcoming Congressional bill on AI, that "Congress knows that there are going to be gaps" and that "Congress knows that this Court and lower courts are not competent with respect to deciding all questions about AI that are going to come up in the future. And what Congress wants, we presume, is for people who actually know about AI to decide those questions." This could not be truer for future litigation.
Regulatory agencies are already stretching the boundaries of Congressional mandates to address AI. For example, in the National Defense Authorization Act for Fiscal Year 2021, Div. E, §5002(3), Congress defined AI as "a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments." This is a relatively strict definition that the FTC has already recognized is not useful in accomplishing its regulatory mission. In 2022, the FTC submitted a report to Congress focused on AI, stating that it "assume[s] that Congress is less concerned with whether a given tool fits within a definition of AI.... In other words, what matters more is output and impact." As such, to execute its regulatory mission, the FTC has already broadened the scope of products and tools it considers within the AI regulatory framework to include those that are not necessarily AI under a stricter definition. And use of this broader scope has already resulted in FTC enforcement actions over practices that mislead consumers, such as fake dating profiles, phony followers, deepfakes, chatbots, false advertisements, and fake sales.
Shifting regulatory power from agencies like the FTC to the courts gives well-resourced tech companies with armies of lawyers even more power to fend off regulations they view as harmful. Any agency attempting to fill regulatory gaps will undoubtedly immediately be met with litigation clamoring for a narrow and strict interpretation of laws that are drafted without every use-case or unseen innovation in mind.

CALIFORNIA LEGISLATION

California is at the forefront of comprehensive legislation for AI. California has had laws targeting AI since 2019, with the passage of AB 730, which prohibited the use of deepfakes to influence political campaigns, and AB 602, which created a private right of action for deepfake pornography, both of which became effective in January 2020. A slew of new bills was introduced in the California legislature last year and this year, including a broad measure called the "Safe and Secure Innovation for Frontier Artificial Intelligence Systems Act" (SB 1047). On May 21, 2024, the California Senate passed SB 1047 with bipartisan support (32-1). Introduced by Senator Wiener with co-authors Senators Roth, Rubio, and Stern, SB 1047 would require safety standards for large AI models, protect whistleblowers who report noncompliance with the Act, and empower California's Attorney General to take legal action if the developer of a powerful AI model causes severe harm to Californians or if the developer's negligence poses an imminent threat to public safety. The bill is now in the Assembly, where it must pass by Aug. 31, 2024.

ELDER ABUSE LEGISLATION

Additionally, while not addressing AI directly, California Senate Bill 278, introduced in February of 2023, would amend the Elder Abuse and Dependent Adult Civil Protection Act to strengthen financial abuse protections by clarifying the duties of banks and financial institutions to safeguard against fraud. This bill would provide California elders with additional protections against deepfakes and phishing scams by enabling them to recover when their banks fail to safeguard their life savings against the malicious use of AI.

THE IMMEDIATE FUTURE

The legal landscape of AI is not evolving as rapidly as AI itself, but as lawyers and legislators alike respond to the clear harms and unchecked bad actors, there are inevitable new and old hurdles to address and overcome. The litigation and evidentiary issues are going to be overwhelming. To ensure that AI is created, developed, and used safely, there must be additional accountability measures for the developers, platforms, and accomplices that allow harm to come to Californians through the inappropriate or malicious use of AI. Until those strong protections are affirmatively in place, lawyers and regulatory agencies will, to the extent possible, continue to try to fill the gaps and carve out protections and remedies for those affected by AI harms. The fraud posed by AI has many lawyers and legislators at a loss--but they had better move quickly to keep our society viable.

