Is Artificial Intelligence Sexist and Racist?

18 February 2019 Labor & Employment Law Perspectives Blog

Last year, Amazon scrapped its machine-learning recruiting algorithm after discovering it had a major problem: the artificial intelligence didn’t like women. The machine-learning tool was designed to analyze resumes and compare potential applicants to Amazon’s current workforce, taking in 100 resumes and narrowing them down to the top five applicants.

The problem was a pre-existing gender gap in software development and other technical roles. When the artificial intelligence tool analyzed the patterns in Amazon’s hiring practices over the prior 10-year period, it taught itself to favor men over women. Amazon ultimately abandoned the tool.

Amazon’s experience highlights an important limitation of machine-learning tools: they are only as good as the data they are given. While artificial intelligence can screen potential job candidates more quickly and efficiently, such algorithms can inadvertently reinforce discrimination in hiring practices. In Amazon’s case, applicants for technical jobs were more likely to be male than female. The algorithm mistakenly interpreted this gender gap as a hiring preference on Amazon’s part. Thus, instead of highlighting qualified women, the algorithm screened those candidates out.
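
For the technically inclined, the short sketch below makes the mechanism concrete. It is a hypothetical illustration only, written in Python with scikit-learn; the feature names and data are invented and it is in no way Amazon’s actual system. A simple classifier is trained on historical hiring decisions that penalized a gender-correlated resume signal, and the model dutifully learns a negative weight on that signal, so it will down-rank otherwise identical candidates who carry it.

```python
# Hypothetical sketch only -- not Amazon's system. Feature names and data are
# invented to show how a model trained on biased historical hiring decisions
# absorbs that bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1_000

# Two resume features: years of experience, and a gender-correlated signal
# (e.g., the resume mentions a women's professional organization).
years_experience = rng.normal(5, 2, n)
womens_org_signal = rng.integers(0, 2, n)

# Historical hiring labels: past decisions rewarded experience but, because of
# biased practices, also penalized the gender-correlated signal.
hired = (years_experience - 1.5 * womens_org_signal + rng.normal(0, 1, n)) > 5

X = np.column_stack([years_experience, womens_org_signal])
model = LogisticRegression().fit(X, hired)

# The learned coefficient on the gender-correlated signal is negative: the
# model will down-rank otherwise identical candidates who carry that signal.
print(dict(zip(["years_experience", "womens_org_signal"], model.coef_[0])))
```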

Employers these days have a panoply of tech-based tools at their disposal. Websites like Monster.com and Indeed.com advertise job openings and generate large numbers of applicants, and employers are turning to tech-based tools to reduce both the time and the cost of hiring. Such tech-based tools, however, are designed to mimic human decision-making. Therefore, when a tool relies on data that is inaccurate or biased, it can inadvertently discriminate against women or minorities. Studies have found that tech-based tools can also discriminate in more subtle ways. For example, one employer attempting to maximize employee tenure found that applicants who lived closer to work tended to stay longer. Screening applicants based on how far they lived from work, however, tended to disproportionately screen out certain minority candidates.
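
The commute-distance example is a classic case of proxy discrimination: the protected characteristic never appears as an input, yet a correlated feature produces a disparate result. The brief simulation below uses invented numbers and group proportions, purely for illustration, to show how a facially neutral distance cutoff can yield very different selection rates across groups.

```python
# Hypothetical simulation with invented numbers: a facially neutral commute
# distance cutoff produces very different selection rates across groups when
# one group tends to live farther from the worksite.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Assumed housing pattern (for illustration only): minority applicants live
# farther from the worksite on average.
is_minority = rng.random(n) < 0.30
distance_miles = np.where(is_minority,
                          rng.normal(18, 6, n),
                          rng.normal(10, 6, n))

# Facially neutral screening rule: keep applicants within 12 miles.
passes_screen = distance_miles <= 12

rate_minority = passes_screen[is_minority].mean()
rate_other = passes_screen[~is_minority].mean()
print(f"selection rate, minority applicants: {rate_minority:.0%}")
print(f"selection rate, other applicants:    {rate_other:.0%}")
print(f"impact ratio: {rate_minority / rate_other:.2f}")
```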

Under Title VII of the Civil Rights Act of 1964 and analogous state and local laws, the employer is responsible for ensuring that it screens job applicants in a nondiscriminatory manner. Therefore, if you are using or considering a tech-based tool to help you screen job applicants, you should take steps to ensure that the tool does not disproportionately screen out candidates based on gender, race, or membership in another protected class. Simply telling a tech-based tool not to discriminate against minorities or women may be insufficient, because such tools will attempt to identify candidates who reflect your existing hiring practices. Some helpful tips to consider when using tech-based hiring tools are:

  1. Do not rely exclusively on tech-based hiring tools. Most tools will rank candidates. Employers should review lower-ranked candidates and make independent assessments based on nondiscriminatory criteria.
  2. Consistently review and update data provided to your hiring tool. Make sure the data your hiring tool relies on does not reflect discriminatory hiring practices.
  3. Independently audit the results and rankings generated by the hiring tool, and make appropriate adjustments as necessary (a simple example of such an audit appears after this list).
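
As a concrete illustration of the third tip, the sketch below applies the EEOC’s “four-fifths” rule of thumb, under which a protected group’s selection rate falling below 80% of the highest group’s rate is commonly treated as preliminary evidence of adverse impact. The applicant and advancement counts are invented; an actual audit should be designed with counsel and appropriate statistical expertise.

```python
# Sketch of a simple adverse-impact check in the spirit of tip 3, using the
# EEOC "four-fifths" rule of thumb. The applicant and advancement counts are
# invented; a real audit should be designed with counsel.
groups = {
    "men":   {"applied": 400, "advanced": 120},   # hypothetical counts
    "women": {"applied": 300, "advanced": 45},
}

rates = {g: d["advanced"] / d["applied"] for g, d in groups.items()}
highest_rate = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / highest_rate
    status = "flag for review" if impact_ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {impact_ratio:.2f} ({status})")
```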

Over time, these tech-based hiring tools will likely improve and, hopefully, screen applicants free of any discriminatory bias. Until the technology is perfected, however, employers should take steps to make sure that members of protected classes are not disproportionately screened out by tech-based hiring algorithms.

