Thank you to co-author Lara Coole, a summer associate in Foley & Lardner’s Jacksonville office, for her contributions to this post.
Artificial Intelligence (AI) systems are poised to drastically alter the way businesses and governments operate on a global scale, with significant changes already under way. The technology takes multiple forms, including natural language processing, machine learning, and autonomous systems, and with the proper inputs it can be leveraged to make predictions, recommendations, and even decisions.
Accordingly, enterprises are increasingly embracing this dynamic technology. A 2022 global study by IBM found that 77% of companies are either currently using AI or exploring it for future use, creating value by increasing productivity through automation, improved decision-making, and enhanced customer experience. Further, according to a 2021 PwC study, the COVID-19 pandemic accelerated AI adoption for 52% of companies as they sought to mitigate the crisis’s impact on workforce planning, supply chain resilience, and demand projection.
Challenges of Global Regulation
For the many businesses investing significant resources into AI, it is critical to understand the current and proposed legal frameworks regulating this novel technology. For businesses operating globally in particular, the task of ensuring that their AI technology complies with applicable regulations is complicated by the differing standards emerging from China, the European Union (EU), and the U.S.
China has taken the lead in moving AI regulations past the proposal stage. In March 2022, China enacted a regulation governing companies’ use of algorithms in online recommendation systems, requiring that such services be moral, ethical, accountable, and transparent, and that they “disseminate positive energy.” The regulation mandates that companies notify users when an AI algorithm is playing a role in determining which information to display to them, and that they give users the option to opt out of being targeted. Additionally, the regulation prohibits algorithms that use personal data to offer different prices to different consumers. We expect these themes to recur in AI regulations throughout the world as they develop.
Meanwhile, in the EU, the European Commission has published an overarching regulatory framework proposal, the Artificial Intelligence Act, which would have a much broader scope than China’s enacted regulation. The proposal focuses on the risks created by AI, sorting applications into categories of minimal, limited, high, or unacceptable risk. Depending on an application’s designated risk level, corresponding government action or compliance obligations would follow. So far, the proposed obligations focus on enhancing the security, transparency, and accountability of AI applications through human oversight and ongoing monitoring. Specifically, companies would be required to register stand-alone high-risk AI systems, such as remote biometric identification systems, in an EU database. If the proposed regulation passes, the earliest compliance date would be the second half of 2024, with potential fines for noncompliance ranging from 2% to 6% of a company’s annual revenue.
Additionally, the previously enacted EU General Data Protection Regulation (GDPR) already carries implications for AI technology. Article 22 prohibits decisions based solely on automated processing that produce legal consequences or similarly significant effects for individuals, unless the program obtains the user’s explicit consent or meets other requirements.
In the United States, the approach to AI regulation has so far been fragmented, with states enacting their own patchwork of AI laws. Many of the enacted regulations focus on establishing commissions to determine how state agencies can utilize AI technology and to study AI’s potential impacts on the workforce and consumers. Common pending state initiatives go a step further and would regulate AI systems’ accountability and transparency when they process, and make decisions based on, consumer data.
On a national level, the U.S. Congress enacted the National AI Initiative Act in January 2021, creating the National AI Initiative that provides “an overarching framework to strengthen and coordinate AI research, development, demonstration, and education activities across all U.S. Departments and Agencies . . . .” The Act created new offices and task forces aimed at implementing a national AI strategy, implicating a multitude of U.S. administrative agencies including the Federal Trade Commission (FTC), Department of Defense, Department of Agriculture, Department of Education, and the Department of Health and Human Services.
Pending national legislation includes the Algorithmic Accountability Act of 2022, which was introduced in both houses of Congress in February 2022. In response to reports that AI systems can lead to biased and discriminatory outcomes, the proposed Act would direct the FTC to create regulations requiring “covered entities,” including businesses meeting certain criteria, to perform impact assessments when using automated decision-making processes, specifically including those derived from AI or machine learning.
The Federal Trade Commission Is Proactive
While the FTC has not promulgated AI-specific regulations, this technology is on the agency’s radar. In April 2021, the FTC issued a memo apprising companies that using AI that produces discriminatory outcomes equates to a violation of Section 5 of the FTC Act, which prohibits unfair or deceptive practices. And the FTC may soon take this warning a step further: in June 2022, the agency indicated that it will submit an Advance Notice of Proposed Rulemaking to “ensure that algorithmic decision-making does not result in harmful discrimination,” with the public comment period ending in August 2022. The FTC also recently issued a report to Congress discussing how AI may be used to combat online harms, ranging from scams to deep fakes to opioid sales, but advised against over-reliance on these tools, citing the technology’s susceptibility to producing inaccurate, biased, and discriminatory outcomes.
Potential Liability for Businesses in the U.S.
Companies should carefully discern whether other, non-AI-specific regulations could subject them to potential liability for their use of AI technology. For example, the U.S. Equal Employment Opportunity Commission (EEOC) issued guidance in May 2022 warning companies that their use of algorithmic decision-making tools to assess job applicants and employees could violate the Americans with Disabilities Act by, in part, intentionally or unintentionally screening out individuals with disabilities.
Broader Impact on U.S. Businesses
Many other U.S. agencies and offices are beginning to enter the fray of AI. In November 2021, the White House Office of Science and Technology Policy solicited engagement from stakeholders across industries in an effort to develop a “Bill of Rights for an Automated Society.” Such a Bill of Rights could cover topics like AI’s role in the criminal justice system, equal opportunities, consumer rights, and the healthcare system. Additionally, the National Institute of Standards and Technology (NIST), which falls under the U.S. Department of Commerce, is engaging with stakeholders to develop “a voluntary risk management framework for trustworthy AI systems.” The output of this project may be analogous to the EU’s proposed regulatory framework, but in a voluntary format.
The overall theme of enacted and pending AI regulations globally is maintaining the accountability, transparency, and fairness of AI. For companies leveraging AI technology, ensuring that their systems remain compliant with the various regulations intended to achieve these goals could be difficult and costly. Two aspects of AI’s decision-making process make oversight particularly demanding:
- Opaqueness: users can control data inputs and view outputs, but are often unable to explain how, or on the basis of which data points, the system made a decision.
- Frequent adaptation: decision-making processes evolve over time as the system learns, so yesterday’s validated behavior may not describe today’s.
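These two properties can be illustrated with a deliberately simplified sketch (purely illustrative; the class and its parameters are invented for this example, not drawn from any regulation or real system): a tiny online linear classifier whose internal weights are not human-interpretable and shift with every new example it sees.

```python
# Illustrative sketch only: a minimal online learner showing why AI
# oversight is hard. All names here are hypothetical.

class OnlineClassifier:
    def __init__(self, n_features, lr=0.1):
        # Internal state the end user never sees (opaqueness).
        self.weights = [0.0] * n_features
        self.lr = lr

    def predict(self, x):
        # The decision is a weighted sum; nothing in the output explains
        # which feature drove the result or why.
        score = sum(w * xi for w, xi in zip(self.weights, x))
        return 1 if score > 0 else 0

    def learn(self, x, label):
        # Frequent adaptation: every observed example nudges the weights,
        # so the decision boundary changes continuously over time.
        error = label - self.predict(x)
        self.weights = [w + self.lr * error * xi
                        for w, xi in zip(self.weights, x)]

model = OnlineClassifier(n_features=2)
before = model.predict([1.0, -1.0])
model.learn([1.0, -1.0], label=1)   # one new data point...
after = model.predict([1.0, -1.0])
print(before, after)                # ...flips the decision for the same input
```

A compliance audit that validated this model’s outputs on Monday says little about its behavior on Tuesday, which is precisely the monitoring burden the regulations above would impose.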
Therefore, it is important for regulators to avoid overburdening businesses, so that stakeholders can still leverage AI technologies’ considerable benefits in a cost-effective manner. The U.S. has the opportunity to observe the outcomes of the current regulatory action by China and the EU and to determine whether their approaches strike a favorable balance. However, the U.S. may also want to accelerate its promulgation of similar laws so that it can play a role in setting the global tone for AI regulatory standards.