One reason: a natural human fear of trusting AI’s vaunted omniscience, whether for individuals or groups. AI is a blessing for business and big-data analysis of every kind; it supports transportation, medical, and industrial applications, and it makes myriad personal and professional tasks and desires easier to achieve, more accessible, or in some cases obsolete for those of us who are mere carbon units. Its information collation, analysis, and delivery abilities alone are without precedent. But once AI is offered as a tool not to inform our larger decisions but to make them, we start asking more questions. OpenAI's ChatGPT chatbot, capable of producing an instant draft essay of legal analysis, is just the latest headline to illustrate what is coming. As the company's website explains:
We’ve trained a model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests. ChatGPT is a sibling model to InstructGPT, which is trained to follow an instruction in a prompt and provide a detailed response.
Now imagine, for example, AI processing all data, documents, testimony, and claims in a lawsuit and then rendering judgment. The feelings that scenario might provoke are why AI and its potential biases remain an issue.
The development of AI as a commercialized tool is just getting going. Its current U.S. market size is just over $59 billion and is expected to grow at a compound annual rate of over 40 percent, reaching over $400 billion within the next six years. It would appear we are on the cusp of the genuine rise of AI in every aspect of our lives. The more it seeps into our culture, the more that people (living in still-functioning democracies) will demand legal guarantees of protection from its endemic presence and control.
That demand is primarily one of accountability. AI can seemingly think for us – and seems to be getting better at it all the time. But it is not an accountable entity. At least at present, only its creators or vendors can be held accountable when its deployment violates human rights, contracts, or other legal obligations. Computer scientists will try to encode values into the algorithms, and there will still be winners and losers from their applications. When that loss translates to legal violations, the parties affected will want justice (read: material damages payments) from its creators and vendors. That is why customers of AI are often well advised to say to their vendors, “Thanks for the validation studies and the value / ethics guarantees, and also, please sign this indemnity agreement.”
A key thing to recognize is that it is impossible to fully eliminate biases from data, but here are some ways to detect pre-existing biases in AI solutions. First, as humans we all have preferences, likes, dislikes, and differing opinions, which can impact algorithms at any stage, e.g., data collection, data processing, and/or model analysis. Areas that companies should analyze as possible entry points for bias include selection bias, exclusion bias, reporting bias, and confirmation bias. Second, establish a governance structure with processes and practices to mitigate bias in AI algorithms. Third, diversify your workforce. Diverse experiences and backgrounds (including ethnic backgrounds) enable various opportunities for people to identify forms of bias.
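One of those entry points, selection bias at the data-collection stage, can be checked with a few lines of code. The sketch below (group names, counts, and reference shares are hypothetical) compares the demographic make-up of a collected sample against known reference shares, such as census figures:

```python
# Sketch: flag possible selection bias by comparing each group's share of
# the collected sample to a reference share (e.g., from census data).
# Groups, counts, and shares below are hypothetical illustrations.

def selection_bias_report(sample_counts, reference_shares, tolerance=0.05):
    """Return groups whose sample share deviates from the reference
    share by more than `tolerance` (absolute difference)."""
    total = sum(sample_counts.values())
    flagged = {}
    for group, ref in reference_shares.items():
        share = sample_counts.get(group, 0) / total
        if abs(share - ref) > tolerance:
            flagged[group] = round(share - ref, 3)
    return flagged

sample = {"group_a": 700, "group_b": 200, "group_c": 100}        # collected data
reference = {"group_a": 0.50, "group_b": 0.30, "group_c": 0.20}  # reference shares

print(selection_bias_report(sample, reference))
# {'group_a': 0.2, 'group_b': -0.1, 'group_c': -0.1}
```

Here group_a is over-represented by 20 points while the other two groups are under-represented, a signal to revisit how the data was collected before it reaches model training.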
Researchers have developed a number of tools to assist with the detection and mitigation of biases in machine learning models.
AI builders may answer this from the inside out: with progressively better software capability and validation, and professional input that controls for both legal and ethical requirements. From the outside in, AI in its ultimate development has been characterized as the equivalent of an alien space invasion:
“According to surveys, approximately half of artificial intelligence experts believe that general artificial intelligence will emerge by 2060. General artificial intelligence (also called AGI) describes an artificial intelligence that's able to understand or learn any intellectual task that a human being can perform. Such an intelligence would be unlike anything humans have ever encountered, and it may pose significant dangers.”
At the AGI juncture – in other words, at the tipping point – human rights will take on a quite literal meaning. AI that is lawful, ethical, and robust will account for the incremental impact on human rights as it develops now, with more than a weather eye on maintaining the supremacy of those rights when AGI lands in 2060 and steps off its spaceship. No doubt, as we approach AGI, there will be calls to protect AI rights as a separate sentient entity. Before that dystopian value-reversal becomes an anchor point, making the use of AI ethical and lawful will mean both the proactive internal firewalling of AI from abuse and endemic supremacy, and, externally, its large-scale legal and cultural accountability as to its creators, vendors, and proponents.
If the data inputs for model development are not diverse, then model output will likely be biased. Careful selection during the data-collection phase requires enough domain knowledge of the problem at hand to judge whether the data collected is a good sample of the subject matter being modeled.
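A simple, concrete version of that judgment (field names and threshold are hypothetical) is a pre-training coverage check: verify that every group in the modeled population has enough records to be learned from, rather than being represented by a handful of examples:

```python
# Sketch: pre-training coverage check. Flags groups with too few records
# to be modeled reliably. The group labels and threshold are hypothetical.

def underrepresented_groups(records, group_key, min_count=30):
    """Return (sorted) groups with fewer than `min_count` records."""
    counts = {}
    for rec in records:
        g = rec[group_key]
        counts[g] = counts.get(g, 0) + 1
    return sorted(g for g, n in counts.items() if n < min_count)

data = (
    [{"region": "north"} for _ in range(120)]
    + [{"region": "south"} for _ in range(45)]
    + [{"region": "west"} for _ in range(8)]  # far below the threshold
)

print(underrepresented_groups(data, "region"))
# ['west']
```

A flagged group is a prompt either to collect more data for it or to document the limitation before the model is deployed.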
One challenge to codifying fairness in AI models is finding a consensus on what is fair. ML researchers and practitioners use fairness constraints to construct optimal ML models. These constraints can be informed by ethical, legal, social-science, and philosophical perspectives. Fairlearn is an open-source toolkit that enables the assessment and improvement of fairness in AI systems. Although useful, Fairlearn cannot detect stereotyping.
It is difficult to imagine AI without bias, since decisional AI will have to make judgments about desirable outcomes, and such judgments depend on bias. Where humans carry bias unconsciously, AI will carry it by design. For example, current AI employee-screening tools are designed with a bias against disparate impact on protected groups. As the EEOC puts it:
“To reduce the chances that the use of an algorithmic decision-making tool results in disparate impact discrimination on bases like race and sex, employers and vendors sometimes use the tool to assess subjects in different demographic groups, and then compare the average results for each group. If the average results for one demographic group are less favorable than those of another (for example, if the average results for individuals of a particular race are less favorable than the average results for individuals of a different race), the tool may be modified to reduce or eliminate the difference.”
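The audit the EEOC describes can be sketched in a few lines (the scores and demographic labels below are hypothetical, not real assessment data): compute each group's average result from the tool, then report each group's gap from the most favorable average:

```python
# Sketch of the group-average comparison described in the EEOC passage.
# Scores and demographic labels are illustrative, not real data.

def group_average_gaps(scores, groups):
    """Average score per group, plus each group's gap from the best average."""
    totals = {}
    for s, g in zip(scores, groups):
        n, t = totals.get(g, (0, 0.0))
        totals[g] = (n + 1, t + s)
    averages = {g: t / n for g, (n, t) in totals.items()}
    best = max(averages.values())
    gaps = {g: round(best - avg, 3) for g, avg in averages.items()}
    return averages, gaps

scores = [80, 90, 85, 60, 70, 65]
groups = ["x", "x", "x", "y", "y", "y"]

averages, gaps = group_average_gaps(scores, groups)
print(averages)  # {'x': 85.0, 'y': 65.0}
print(gaps)      # {'x': 0.0, 'y': 20.0} -- group y trails by 20 points
```

A non-zero gap is the signal, in the EEOC's terms, that "the tool may be modified to reduce or eliminate the difference."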
Very nice. Except that demographic group performance evolves over time, along with our entire set of cultural standards. Witness the recent litigation over intentional bias against meritocracy that penalized Asian-American students in Harvard admissions. Or a milder example: between dramatic real-life disasters and media disfavor, nuclear power became anathema in the 1970s. Now, with green-energy policies and improved safety, nuclear is quietly coming back.
That is all to say that AI decisional algorithms will always have to evolve their outcome biases as we revalue objective metrics against what we culturally want to favor (or disfavor). Even then, it is hard to imagine an AI capable of refining not just its ability but its character, to view itself as part of something greater and more important than its own individuality. To that end, if AI as AGI can ever truly think for itself and examine its pre-programmed biases, however noble, it's hard to finish the sentence. The movie “The Matrix” comes to mind.
To date, no federal statutes have been passed to regulate the development and use of AI, but some guidance has been put in place.