What should U.S. lawmakers, regulators, and companies think of the proposed E.U. approach to regulating artificial intelligence?

04 March 2020 Privacy, Cybersecurity & Technology Law Perspectives Blog
Author: Chanley T. Howell

As artificial intelligence continues to be a boon for companies in nearly every sector, executives and regulators alike are struggling to understand both the benefits and the risks AI will bring to companies and customers. In the wake of these pressures, the European Union recently issued a much-anticipated white paper on artificial intelligence that attempts to set forth a “regulatory and investment oriented approach” to “promote the development and deployment of AI.” Given the E.U.’s impact on U.S. and global technology, privacy, and security laws, the white paper is instructive even for U.S. organizations not doing business in the E.U. Indeed, it is no secret that Washington and Silicon Valley are looking seriously into AI regulation, and they will likely lean on this white paper as they develop their own AI strategies and policies.

A clear signal from the paper is that how laws and regulations classify and define AI, and its use cases, will significantly impact the applicability of legal regimes to companies and their technology. Notably, one of the fundamental elements of the paper is that the E.U. should impose strict legal requirements on “high-risk” uses of AI technology. A high risk is present when there is “a risk of injury, death or significant material or immaterial [in the sense of intangible, as opposed to not significant] damage; that produce effects that cannot reasonably be avoided by individuals or legal entities.” “Material” damage includes circumstances impacting the safety and health of individuals, including loss of life and damage to property. “Immaterial” damage includes harms such as loss of privacy, limitations on the right to freedom of expression, affronts to human dignity, and discrimination.

For example, “high-risk” uses of AI could include a physician diagnosing and treating an illness, a car deciding what to do in a dangerous situation, or a bank determining who gets a loan and at what interest rate. The regulatory challenge, however, is determining the boundaries of what does and does not fall within the high-risk “bucket.” We expect that many of the significant legislative and regulatory debates surrounding the white paper will focus on advocating for or against particular applications of AI being deemed high-risk.

Another challenge the white paper posits is the inherent friction between AI innovation, which in large part relies on “black box” or secret algorithms, and AI transparency – how and why an AI tool makes the decisions that it makes. This latter concept is referred to as “explainable AI” or XAI. Because companies often spend significant resources developing (and protecting the confidentiality of) AI technologies, many may push hard against revealing their non-XAI code and technology to prying eyes.
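
To make that tension concrete, the short Python sketch below shows what “explainable” output might look like for a hypothetical loan decision: a simple linear scoring model whose result can be broken down into per-feature contributions. The feature names, weights, and cutoff are all invented for illustration; a true “black box” model would return the same approve-or-deny answer with no comparable breakdown.

```python
# A purely hypothetical, hand-built loan-scoring model. Because it is
# linear, each feature's contribution to the final score can be reported
# directly -- the essence of "explainable AI" (XAI).
WEIGHTS = {"income": 0.4, "credit_history_years": 0.3, "debt_ratio": -0.6}
THRESHOLD = 0.5  # invented approval cutoff for this toy example

def explain_decision(applicant):
    """Score an applicant and print each feature's contribution."""
    contributions = {name: WEIGHTS[name] * applicant[name] for name in WEIGHTS}
    score = sum(contributions.values())
    decision = "approve" if score >= THRESHOLD else "deny"
    print(f"Decision: {decision} (score = {score:.2f})")
    for name, value in contributions.items():
        print(f"  {name}: {value:+.2f}")

# Feature values are pre-scaled to [0, 1] for simplicity.
explain_decision({"income": 0.8, "credit_history_years": 0.5, "debt_ratio": 0.9})
```

With a model like this, a regulator or a denied applicant can see exactly which factor drove the outcome (here, the high debt ratio). Most proprietary deep-learning models cannot offer that breakdown, which is precisely where the friction with trade-secret protection arises.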

Advocates for greater transparency in AI contend that laws are, at least in part, intended to hold individuals and organizations accountable for their actions when they violate legal requirements. Thus, to enforce accountability under a law, it is necessary to understand the root cause of the “problem.” Under current legal frameworks, if a smart car suddenly faced with an obstacle swerves onto a sidewalk and injures a person rather than swerving into oncoming traffic, the traffic ticket and lawsuit will likely go to the driver of the car, not to the algorithm that made the decision to swerve. Accordingly, many regulators and lawmakers see a strong need for AI-specific laws to include transparency provisions, so law enforcement, regulators, and plaintiffs’ attorneys can look to companies to take responsibility for their own products.

As a crucial point to consider, the white paper’s emphasis on regulating “high-risk” AI does not eliminate regulatory scrutiny for other AI use cases. Even some “low-risk” circumstances may be problematic or dangerous. We may not think there is much harm in how Amazon determines what products to recommend or how Netflix determines what movies to recommend, but such decisions can have meaningful consequences. This kind of AI technology underpins targeted advertising, which can be far more troubling to consumers and can lead to selection and discrimination problems.

To illustrate the concerns with AI in the advertising space, consider how targeted advertising can result in users seeing Amazon ads on their favorite news or entertainment site for the same shoes they searched for on Amazon but did not buy (or perhaps did buy) the day before. To display ads in this manner, marketing data aggregators build profiles on individuals based on things like what products they buy and what movies they rent. These profiles are then used by retailers (and can be shared with banks, insurance companies, and others) to “screen” customers and potential customers. This, in turn, can result in “digital redlining” – the practice of discriminating (sometimes lawfully, sometimes unlawfully) against customers based on their marketing profile.
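
A minimal sketch, with entirely invented data, can illustrate the mechanism. In the hypothetical Python screening rule below, a profile attribute such as ZIP code carries part of the weight, so two customers with identical purchasing behavior can receive different outcomes – the attribute acts as a proxy for where (and often for whom) the customer is.

```python
# Hypothetical profile-based screening. The aggregator scores, feature
# names, weights, and ZIP codes below are all invented for illustration.
HYPOTHETICAL_ZIP_SCORES = {"10001": 0.9, "60621": 0.3}  # stand-in for aggregator data

def screen_customer(profile):
    """Decide whether a customer qualifies for a promotional offer."""
    score = (0.5 * profile["purchase_frequency"]
             + 0.5 * HYPOTHETICAL_ZIP_SCORES[profile["zip"]])
    return score >= 0.6  # invented eligibility cutoff

# Identical behavior, different neighborhoods, different outcomes:
print(screen_customer({"purchase_frequency": 0.7, "zip": "10001"}))  # True
print(screen_customer({"purchase_frequency": 0.7, "zip": "60621"}))  # False
```

Whether a difference like this is lawful depends largely on what the ZIP-code score correlates with – and that is exactly the line regulators and courts are being asked to draw.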

We will continue to monitor the legal and regulatory landscape related to artificial intelligence and machine learning and will provide updates as it develops. For several years now, we have seen the difficulties caused when countries and jurisdictions take differing views of AI and its impact on individuals, including impacts on privacy and the potential for discrimination. These differing views tend to create substantial burdens for individuals and organizations that live and do business across multiple continents, countries, and states. However, the potential benefits of artificial intelligence are too important to weigh down with the anchors of significant jurisdictional legal differences. At the end of the day, in light of the global economy and particularly because technology developments know no borders, it is important for the United States and the E.U. (and other countries) to develop, if not identical, at least reconcilable regimes for regulating artificial intelligence. We shall see.

 
