
Administrative/Regulatory,
International Law

Oct. 11, 2019

Gearing up for the EU’s next regulatory push

What should companies expect to see from the European Commission in terms of comprehensive artificial intelligence regulations?

H. Mark Lyon

Partner, Gibson Dunn & Crutcher LLP

Email: mlyon@gibsondunn.com

Mark is chair of the firm's Artificial Intelligence and Automated Systems Practice Group. The views, thoughts and opinions expressed herein are those of the author only and do not reflect the views of Gibson, Dunn & Crutcher LLP or any of its clients.

In May of last year, the European Union again took center stage by beginning enforcement of the General Data Protection Regulation -- the culmination of a six-year legislative process. It was one step among many legislative efforts by the European Commission to position the EU as a principal guardian (perhaps "the" principal guardian) of human rights, democracy and the rule of law in today's technological society. Since then, as observers predicted, fines have been swift and substantial, interpretations of the regulation have been relatively literal, and reaction to the regulation has been positive -- at least in the eyes of the regulators themselves.

Of course, not all share such a rosy view of GDPR's impact. Smaller, EU-based startups in emerging technology areas like artificial intelligence may view the requirements of the GDPR as tantamount to a barrier to entry. And while European leaders often urge the quick adoption of strong regulatory controls on the development and use of emerging technologies such as AI, others in industry, and many of those developing AI technologies, echo what is generally considered the U.S. viewpoint (to the extent there is a single one): better to await further development of the underlying AI technologies, and the adoption of related industry standards, than to restrain innovation through premature legislation. Indeed, the EU's own expert advisory board called for restraint on hard regulation in the near term due to just such concerns.

But with the general perception among EU capitals and those in charge at the Commission that GDPR is an effective mechanism for protecting European values, it is unsurprising that the incoming von der Leyen administration has targeted other areas of emerging technology for new legislation. Indeed, in setting out her five-year agenda this past July, Ms. von der Leyen unequivocally stated that her administration will propose legislation directed at regulating AI within its first 100 days (meaning we should see a draft of such proposed legislation by early February 2020). And while such a proposal will be just the start of a likely multi-year process before any regulation goes into effect, it seems clear that the European Commission intends for the EU to maintain its "first out of the gate" status in regulating the new post-digital age.

So, what should we expect to see from the European Commission in terms of a comprehensive AI regulation? While we can't know exactly what regulatory ground a proposal will cover until we actually see it, we can anticipate some likely areas of focus based on statements made by EU leaders, the published writings of those likely to be involved in drafting a proposed AI regulation, and media reports of draft legislation already making the rounds within the European Commission. For example, based on such sources, it seems likely the proposed legislation will:

• Address government funding of research, workforce training (or, perhaps more accurately, re-training), and availability of public data in connection with the overall EU effort toward a Digital Single Market.

• Not seek to significantly rewrite existing frameworks, but instead attempt to fit largely within the confines of the GDPR, the Directive on Copyright in the Digital Single Market, and the ePrivacy Regulation. However, even within such existing frameworks, questions remain -- for example, how should non-personal data (not otherwise subject to an existing framework) that may be used by an automated system be handled?

• Require, like some U.S. states -- notably California -- that any chatbot or virtual assistant interacting with individuals disclose that it is not a human but a machine. In addition, requirements for transparency as to the use of data (personal or otherwise), the bases for decisions or recommendations, and perhaps relevant metrics such as confidence intervals or margins of error will likely form part of the framework, in an effort to avoid unintended bias or disparate impact.

• Require accountability for failures or problems. In particular, now that machines may be taking actions or making decisions previously reserved for humans, shouldn't machine actions and decisions be found illegal if the same would have been illegal for humans? Putting aside the issue of intent for the moment, are there any actions that machines can legally take that humans could not? If the answer is "no," then one way to ensure parity between machines and humans would be to abstract away any intent element through the regulatory framework and codify that what is illegal for a human is illegal for a machine.

• Require GDPR-like impact assessments to ensure AI systems do not perpetuate discrimination or violate fundamental rights or European values. In high-risk cases, such assessments might be conducted at the governmental level, requiring certification or approval. In other instances, developer-level assessments and reports may be sufficient. At the individual level, requirements providing some ability to seek an explanation of how an AI system functions and affects the rights of the individual are also likely. It is also likely these assessments, as well as other requirements, will be deemed ongoing obligations that continue throughout the lifecycle of the product or service, including regular updates.

Still, some matters remain unclear. Will the context of the AI system be considered? For example, will different standards apply to an AI system directed to recommending entertainment choices than to one tasked with approving or declining mortgage applications, or will the European Commission instead seek to apply a "one size fits all" approach regardless of context (a "technology neutral" approach similar to that of the GDPR)? Will the Commission attempt to set performance standards for when an AI is "safe enough" or "good enough" to replace human involvement? Will there be incentives for compliance, or safe harbors to define approved conduct?

Ultimately, all that is certain is that more regulation from the EU directed to AI-based products, systems and services is coming. Given the intended reach of the GDPR, EU and non-EU companies alike will be well-served by paying close attention and joining the dialogue. 


