As concerns about Artificial Intelligence (AI) continue to swell worldwide, the European Union (EU) is providing a regulatory roadmap for the international community. On May 11, 2023, the European Parliament’s Committee on Civil Liberties, Justice and Home Affairs and Committee on Internal Market and Consumer Protection voted to approve the Artificial Intelligence Act. As written, the Act represents a global first in the approach to the legal risk management of AI, one that the U.S. and other nations will surely consider as AI undergoes rapid evolution. The Act sets a tone for the inevitable alignment between the U.S. and EU, as the desire for transatlantic collaboration, regulatory oversight, appropriate industry standards, and facilitation of economic partnerships remains a priority. While the AI Act has been approved by lawmakers in the European Parliament, the legislation must still go through additional steps before it becomes law.
With constantly emerging capabilities for AI in the composition of music, creation of literature, and provision of health services, the proposed Act provides critical principles for ensuring human oversight, safety, transparency, traceability, non-discrimination, and environmental friendliness in AI systems. It seeks to set a universal definition for AI that remains technology-neutral, accommodating both existing and future AI systems. Notably, the Act proposes a risk-based approach to AI regulation, wherein the obligations for an AI system correlate with the level of risk it may pose. The Act includes provisions that exempt research activities and AI components offered under open-source licenses. The legislation also advocates for regulatory sandboxes, i.e., controlled environments established by public authorities, for testing AI before deployment. This approach aims to balance the protection of fundamental rights with the need for legal certainty for businesses and the stimulation of innovation in Europe.
By contrast, federal legislators in the U.S. continue to keep a watchful eye on AI, focusing more attention on funding research into deciphering its capabilities and outcomes. These efforts are fueled in part by the hope that understanding the breadth of AI will help mitigate concerns in the regulatory space. After all, advances in AI technology may themselves serve as mitigating tools for some of the risks identified through the Act’s key principles. The concept of federalism in the U.S. contributes to an already burdensome dilemma of regulatory enforcement: a patchwork of inconsistent state laws, each hoping to be at the forefront of the next major technological revolution. Indeed, various states have already proposed laws regulating the development and use of AI. For example, California has proposed a law (AB 331) that would regulate the use of automated decision tools (including AI) and require both developers and users of these tools to submit annual impact assessments.
AI applications are categorized into four levels of risk: unacceptable risk, high risk, limited risk, and minimal or no risk. Any application that presents an unacceptable risk is prohibited by default and cannot be deployed in the EU. This includes AI systems that employ subliminal techniques or manipulative tactics to alter behavior, exploit individual or group vulnerabilities, categorize biometrics based on sensitive attributes, perform social scoring or trustworthiness evaluation, predict criminal or administrative offenses, create or expand facial recognition databases through untargeted scraping, or infer emotions in law enforcement, border management, workplaces, and education. Minimal risk uses, by contrast, would include systems deployed for product/inventory management or AI-enabled platforms such as video games. Similarly, limited risk systems would include chatbots or other AI-based systems that meet disclosure standards necessary to give users the option to speak with a human instead.
The AI Act identifies the following uses as high-risk: biometric identification and categorization of natural persons; management and operation of critical infrastructure; education and vocational training; employment, worker management, and access to self-employment; access to essential private and public services and benefits; law enforcement; migration, asylum, and border control management; and the administration of justice and democratic processes.
In the context of the AI Act, “social scoring” refers to the practice of evaluating individuals based on their social behavior or personality characteristics, often leveraging a wide range of information sources. This approach is used to assess, categorize, and score individuals, potentially affecting various aspects of their lives, such as access to loans, mortgages, and other services. The current draft includes a ban on social scoring by public authorities in Europe. However, the European Economic and Social Committee (EESC) has expressed concerns that this ban does not extend to private and semi-private organizations, potentially allowing such entities to use social scoring practices. The EESC has called for a complete ban on social scoring in the EU, and for the establishment of a complaint and redress mechanism for individuals who have suffered harm from an AI system.
The EESC has also urged that the AI Act should strive to distinguish between what is considered social scoring and what can be seen as an acceptable form of evaluation for a specific purpose. The line, it suggests, can be drawn where the information used for the assessment is not reasonably relevant or proportionate. Furthermore, the EESC highlights the need for AI to enhance human decision-making and human intelligence rather than replace them, and criticizes the AI Act for not explicitly expressing this view.
A significant aspect of the Act pertains to the regulation of “foundation models,” such as OpenAI’s GPT or Google’s Bard. These models have attracted regulatory attention due to their advanced capabilities and potential displacement of skilled workers. Providers of such foundation models are required to apply safety checks, data governance measures, and risk mitigations before making their models public. Additionally, they must ensure that the training data used to inform their systems does not violate copyright law. The providers of such AI models would also be obliged to assess and mitigate risks to fundamental rights, health and safety, the environment, democracy, and the rule of law.
The United States can expect some of the principles in the Act to show up in both federal and state legislative proposals as the nation seeks to rein in the disruption brought on by AI. As a result of longstanding transatlantic partnerships built on commerce and trade, many U.S. companies are well acquainted with the higher standards of the EU in areas such as product safety rules and certain data rights. Consequently, we should expect this trend to continue as commerce between nations grows. The EU will likely continue requiring compliance from U.S. companies in order to conduct business across the Atlantic, and we can expect the scope of such compliance to now involve AI. While these concepts may manifest in myriad ways, such as President Biden’s AI Bill of Rights, states may find themselves encouraged to develop their own regulatory schemes around the use of AI by mirroring certain provisions of the Act. Businesses will need to remain attentive to shifting regulatory structures and emerging enforcement mechanisms in the U.S. as the nation grapples with change. The ultimate goal of the EU proposal is to provide a regulatory framework for AI companies and organizations that use AI, thereby facilitating a balance between innovation and protection of citizens’ rights.