This is the property of the Daily Journal Corporation and fully protected by copyright. It is made available only to Daily Journal subscribers for personal or collaborative purposes and may not be distributed, reproduced, modified, stored or transferred without written permission.

Brandon Reilly

Jul. 24, 2024

Manatt, Phelps & Phillips, LLP

Costa Mesa, Los Angeles
Brandon Reilly's expertise in privacy and data security has led him to advise on AI-powered technology, with a focus on issue spotting and contracting. Since November 2023, he has been at the forefront of guiding organizations across industries on the responsible deployment of AI.
"Meghan O'Gieblyn's book, 'God, Human, Animal, Machine,' really crystallized for me the transformative nature of AI and similar future technologies, not only with regards to its impact on technological innovation but on philosophy and religion as well," Reilly said. "In my own practice, over the past decade, it has been impossible to ignore the promise of big data and its potential uses for creating and training AI models, and with it, the great responsibility that we all have in respecting the risks incumbent with it."
Reilly has established himself as a thought leader on artificial intelligence, becoming a go-to resource for publications covering the topic. His insights have been featured in the Daily Journal, discussing the AI medical device industry, and in IAPP, commenting on a private-sector AI bill in Utah. His written client alerts keep his audience informed on AI-related developments.
With every new engagement building AI governance programs, Reilly emphasizes the importance of learning, whether about novel AI use cases or emerging risks. He advocates applying compliance frameworks from privacy law to AI governance, arguing that the approach translates sensibly from one field to the other.
One of the significant challenges he identifies is the need for clear explanation and interpretation of AI models, a task that primarily falls on developers but extends to all stakeholders. This involves understanding the complex capabilities of foundational models and effectively communicating their functions to the general public.
"One of the biggest challenges is, and will remain, explaining and interpreting how AI models work," Reilly said. "Among the many stakeholders involved, that responsibility will rest first and foremost on developers. But it will involve all of us, whether it is the task of understanding sophisticated and often obscured capabilities of foundational models or of explaining those deployments to the average consumer."
When asked about trends, Reilly said one area of likely dispute will be who in the development chain is responsible when something goes wrong, such as an instance of discrimination, bias or fraud.
"Should businesses be responsible to understand every function and output of an AI deployment?" Reilly asked. "Can they be expected to know when and how the model does not function as intended or what caused it? Likewise, can the developer be expected to predict how its models will be deployed by each business that uses it? These are difficult questions, and I do foresee litigation over them in the future."

