New York City now requires organizations to inform job applicants when AI is used as part of the hiring process. These organizations must also have annual audits conducted by independent parties to ensure the software is free of bias. Companies found in violation will be fined and required to publish the results of these audits.
Washington, D.C. and states like California, New Jersey, and Vermont are also devising their own strategies to regulate hiring with AI.
A variety of companies offer AI tools to automate the hiring process. Many of these companies claim their tools guard against bias, but there are notable examples where they failed to do so.
Amazon’s automated recruitment system was intended to evaluate applicants’ suitability for various roles. Because women had historically been underrepresented in technical roles, the system learned to favor male applicants and consequently penalized resumes from female candidates. Despite attempts to correct the problem, Amazon ultimately scrapped the initiative in 2017.
The NYC law, and others being proposed, aim to provide guardrails against systems like Amazon’s. Washington, D.C., is considering a law that would hold employers accountable for preventing bias in automated decision-making algorithms. In California, two bills that aim to regulate AI in hiring were introduced this year. And in late December, a bill was introduced in New Jersey that would regulate the use of AI in hiring decisions to minimize discrimination.
As of July 1, hiring teams in New York City are subject to new regulations governing the use of AI in hiring, through tools the Department of Consumer and Worker Protection defines as “automated employment decision tools,” or AEDTs. The purpose of the new measure is to ensure that companies act with fairness and equal consideration when hiring with AI, a technology that has sparked concern over its questionable objectivity.