Automation Bias and Discrimination in the Hiring Process
By Attorney Andy Hilger
Hiring decisions are time consuming, expensive, and important. Some employers are using, or considering using, artificial intelligence (AI) and automation to streamline the process, and these tools are becoming more widely available and popular. While the use of AI is typically limited to the initial screening of applicants, in some cases it extends to preliminary interviews, where systems evaluate vocabulary, facial expressions, the tone and inflection of a person’s voice, and other criteria.
Employers need to be aware of the potential pitfalls and the exposure to discrimination claims based on disparate impact under Title VII and the ADEA. Disparate impact claims arise when an employer uses a facially neutral policy or practice that has a disproportionate effect on members of a protected class and is not justified by business necessity. If the use of a computer algorithm results in members of a protected class being excluded from consideration, that result can give rise to a discrimination claim. This remains true even if the computer was never “taught” to exclude members of a protected class. Disparate impact claims turn on the results of the employer’s practices, not merely the employer’s intent.
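To see how results-focused this analysis is, consider the “four-fifths rule” from the EEOC’s Uniform Guidelines on Employee Selection Procedures, a common starting point for measuring adverse impact: if one group’s selection rate is less than 80% of the highest group’s rate, the practice is flagged for further scrutiny. Below is a minimal sketch of that calculation in Python; the group labels and counts are purely hypothetical.

```python
# Hypothetical screening outcomes: applicants and advances per group.
# These numbers are illustrative only, not real data.
outcomes = {
    "group_a": {"applied": 200, "advanced": 60},
    "group_b": {"applied": 150, "advanced": 27},
}

# Selection rate: the share of each group's applicants who passed the screen.
rates = {g: c["advanced"] / c["applied"] for g, c in outcomes.items()}

# Four-fifths rule: compare each group's rate to the highest group's rate.
benchmark = max(rates.values())
for group, rate in rates.items():
    ratio = rate / benchmark
    flag = "potential adverse impact" if ratio < 0.8 else "within guideline"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f} ({flag})")
```

Here group_b’s impact ratio is 0.60, well under the 0.8 threshold. Failing the four-fifths rule does not by itself establish liability, but it is exactly the kind of statistical disparity that can support a disparate impact claim regardless of what the employer intended.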
AI systems are also susceptible to attack for the same reasons as the standardized tests employers used in the 1970s, which were later determined to be discriminatory. Those tests excluded members of certain protected classes from consideration, and the central problem was the lack of correlation between test performance, on the one hand, and job relatedness and job performance, on the other. Automated decisions can have the same flaw. For example, if a system learns that geographic proximity to the employer is an indicator of a good employee, the effect could be to reduce the diversity of the applicant pool, not for any facially discriminatory purpose, but based on a factor that is neither job related nor an indicator of job performance.
In one example, a company scrapped its plan to use automation after discovering that the algorithm’s strongest indicators of success were being named “Jared” and having played high school lacrosse. This may sound like a bad joke until you consider the effect of these criteria: women are very likely to be excluded by the first, and high school lacrosse is generally offered only at more affluent and typically less diverse schools.
Proponents of AI and automation in hiring suggest that it is actually more inclusive of members of protected classes, because its processing power allows it to review and consider far more applicants than a human being ever could. They also suggest that a computer cannot have implicit biases. While it may be true that a computer has no implicit biases of its own, it can acquire learned biases from the data it is trained on, through no fault of its own.
If an employer decides that AI and automation in the hiring process are the right approach, it needs to be aware of the potential for discrimination claims and take the necessary steps to avoid all forms of discrimination, including disparate impact. Employers using AI should make sure they know precisely what factors the software considers in its decision-making. In addition, employers should audit the results the software produces, because discrimination claims based on disparate impact turn on results more than process.
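One way to get that look inside the tool, where the employer controls the model, is to measure how much each input actually drives its decisions. The sketch below uses scikit-learn’s permutation importance on a small synthetic screening model; the feature names, the model choice, and the data are illustrative assumptions, not a description of any real hiring product.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Hypothetical screening features; a real pipeline would have many more.
feature_names = ["years_experience", "degree_level", "distance_to_office_km"]
X = rng.normal(size=(500, len(feature_names)))

# Synthetic labels that (for illustration) lean heavily on distance,
# the kind of proxy factor, noted above, that may not be job related.
y = (X[:, 0] + 2 * X[:, 2] + rng.normal(size=500) > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance: how much does shuffling each feature's values
# degrade the model's accuracy? Large drops mean heavy reliance.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")
```

If a proxy such as distance to the office dominates the ranking, that is the employer’s cue to ask whether the factor is actually job related before the disparity shows up in hiring results.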