(Artificially) Intelligent Hiring: Good Solution or Technology Gone Mad?

24 June 2019 Labor & Employment Law Perspectives Blog
Author(s): John L. Litchfield

“Success in creating [Artificial Intelligence] would be the biggest event in human history.  Unfortunately, it might also be the last, unless we learn how to avoid the risks.”  It’s almost as if Stephen Hawking, when he made that statement in 2014, was foreshadowing how employers might misuse artificial intelligence (AI) in work force management.  The use of AI in recruiting and interviewing employees is an emerging trend that brings efficiency and innovation to the hiring process, but it is also fraught with potential liabilities if it is not carefully applied.

If you are unfamiliar with AI in recruiting, you are not alone; the trend is so new that there is little regulation of the practice beyond traditional labor and employment law considerations.  But just as AI is transforming the way businesses interact with their customers, it is now able to screen applicants and conduct interviews.

Yes, that’s right: a robot, not a human, can be used to interview your applicants.  Several platforms are currently available, including tools that use video chat and speech recognition to analyze an applicant’s voice and text sentiment, facial expressions and body language, workplace competencies, and abilities and qualifications for a particular job, among other things.  All of this comes together to create an interview platform that delivers real-time, human-like conversations with candidates and can understand and respond to complex statements, changed answers, and interjections.  Moreover, the platforms are equipped with continuous learning technology that improves with each interview conducted.

Sounds great, right?  Well, it probably is for some companies (though we still believe there is nothing as effective as a human-to-human interview), but employers should be careful about how they use such technology, as it carries some of the same risks that exist in any hiring scenario.  Thoroughly vetting AI vendors and their products should be a primary concern.  Human monitoring of the technology and its propensity to learn or develop biases on the basis of protected categories – race, gender, age, national origin, sexual orientation, gender expression and identity, disability, marital status, veteran status, etc. – must be part of any AI implementation.  For example, an AI tool that relies on speech or facial expression may unintentionally screen out otherwise qualified candidates whose disabilities affect those traits.  Because employers cannot ask applicants about accommodations needed to perform essential job functions before extending an offer, they should be particularly vigilant about how an AI program assesses (or fails to properly assess) a candidate’s qualifications for a particular job apart from any physical or mental disabilities.  Moreover, employers may inadvertently screen out applicants on the basis of a protected characteristic such as age if they “teach” the AI that younger candidates are more desirable than older candidates by regularly selecting younger candidates for hire after an AI evaluation.

State legislatures are catching on to the burgeoning AI hiring trend by imposing their own sets of rules.  The Illinois legislature, for example, recently passed the Artificial Intelligence Video Interview Act (AIVIA), which regulates how employers can utilize AI to analyze job applicants through video interviews.  Specifically, AIVIA will require employers to notify applicants of plans to use AI to analyze the video interview, and to explain how the AI technology works and the characteristics it will use to evaluate candidates.

Employers must obtain the applicant’s consent (written consent is not required) to proceed with the interview using AI, and cannot use AI to evaluate applicants without such consent. AIVIA also places limits on who can review applicant videos and requires destruction of such videos upon request by the applicant.  Importantly, the new law does not prohibit an employer from refusing to interview an applicant who does not consent to the use of AI.  No other states have yet adopted a law like AIVIA, but with the growing use of AI in work force management, new laws and regulations can be expected.

Regardless of whether an employer is hiring within or outside Illinois, some basic best practices should still be utilized when considering AI in hiring:

  • Vet the AI vendor and its software before purchase and consult with counsel to ensure that proper protections, such as indemnities to protect against disparate impact discrimination claims, are in place in a contract with an AI vendor.
  • Test the AI technology before purchase if possible, or at a minimum before implementation, to ensure it meets your needs and hiring standards.
  • Understand basic hiring practices and pay close attention to any patterns that develop in the AI screening process that appear to be biased toward certain groups to the exclusion of others (see the illustrative sketch after this list).
  • Provide written notice to applicants that AI may be used in the evaluation of candidate qualifications.
  • If an applicant asks for an accommodation on the basis of a disability, be sure to engage in the interactive process before proceeding with the use of AI to ensure that an otherwise qualified applicant is not inadvertently screened out on the basis of the disability.
  • Train one or more individuals to monitor AI outputs and the AI’s machine learning over time so that any biases that develop can be identified and addressed.
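For readers who want a concrete picture of what monitoring for biased patterns can look like, below is a minimal, hypothetical sketch of one common screening heuristic: comparing selection rates across applicant groups under the EEOC’s “four-fifths rule,” which treats a group’s selection rate below 80% of the highest group’s rate as a possible indicator of adverse impact.  The data, field names, and threshold here are illustrative assumptions, not a description of any particular vendor’s platform, and a flagged result is a prompt for human and legal review, not a legal conclusion.

```python
# Illustrative sketch only: an adverse impact check based on the EEOC "four-fifths rule."
# Hypothetical data: each record is a group label plus whether the AI screen advanced
# that applicant. Field names and the 0.8 threshold are assumptions for illustration.

from collections import defaultdict


def selection_rates(records):
    """Compute the share of applicants in each group who were advanced by the screen."""
    advanced = defaultdict(int)
    totals = defaultdict(int)
    for group, was_advanced in records:
        totals[group] += 1
        if was_advanced:
            advanced[group] += 1
    return {group: advanced[group] / totals[group] for group in totals}


def four_fifths_check(records, threshold=0.8):
    """Flag groups whose selection rate falls below 80% of the highest group's rate."""
    rates = selection_rates(records)
    highest = max(rates.values())
    if highest == 0:
        return {}  # no one advanced; nothing to compare
    return {group: rate / highest
            for group, rate in rates.items()
            if rate / highest < threshold}


# Example with made-up outcomes from an AI video-interview screen.
outcomes = [("group_a", True), ("group_a", True), ("group_a", False),
            ("group_b", True), ("group_b", False), ("group_b", False)]
print(four_fifths_check(outcomes))  # approx. {'group_b': 0.5} -> warrants human review
```

In practice, a check like this would be run periodically on the tool’s screening outcomes, and any flagged disparity would be reviewed with counsel before drawing conclusions or making changes.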

In the end, while AI can be a useful tool in hiring, it should not be used as a substitute for good judgment and human-to-human interaction somewhere in the hiring process.  As Steve Jobs aptly noted to Rolling Stone in 1994, “[t]echnology is nothing.  What’s important is that you have faith in people, that they’re basically good and smart, and if you give them tools, they’ll do wonderful things with them.  It’s not the tools that you have faith in – tools are just tools.  They work, or they don’t work.  It’s the people you have faith in or not.”

