Implications of Using AI in Hiring and Recruiting
Artificial intelligence (AI) uses machines and software to perform tasks that would ordinarily require human intelligence. AI can mimic human judgment in a way that saves time and increases efficiency. The use of AI in hiring is growing rapidly thanks to faster turnaround times and reduced costs. But as the technology becomes pervasive, regulation isn't far behind.
With respect to employment decisions, the Equal Employment Opportunity Commission (EEOC) and the Department of Justice (DOJ) have recently issued guidance on the use of AI, warning that some AI processes may produce outcomes that violate existing federal laws.
No Federal Laws for AI in Hiring. Yet.
While there is no federal law specific to AI in employment, nondiscrimination laws such as Title VII of the Civil Rights Act, the Americans with Disabilities Act (ADA), the Age Discrimination in Employment Act (ADEA), and the Genetic Information Nondiscrimination Act (GINA) continue to apply for most U.S. employers.
Additionally, the Federal Trade Commission (FTC) has clarified that the FTC Act, the Fair Credit Reporting Act (FCRA), and the Equal Credit Opportunity Act (ECOA) also apply to the use of AI in employment.
Further regulation may be on the horizon through the National Telecommunications and Information Administration (NTIA), which could be tasked with establishing guardrails for AI use across many industries, including employment. In Europe, proposals would regulate AI through a framework modeled on the General Data Protection Regulation (GDPR), the EU's sweeping data protection law.
Regulating AI in Hiring and Recruiting
Two U.S. states and one major city have already picked up the baton to regulate AI in employment, with other states and municipalities likely to follow suit soon.
- In Illinois, employers that use AI to analyze video interviews must provide notice to the applicant and explain how their AI program works. They must also get the applicant’s consent, maintain confidentiality, and destroy any video interview copies upon the applicant’s request.
- Maryland requires employers that use AI with a facial recognition feature during interviews to obtain the applicant's consent through a signed waiver. The law prohibits employers from using facial recognition technology to create a template from the applicant's video interview without that consent.
- In New York City, employers cannot use an AI tool to screen job candidates or evaluate employees unless the tool has been audited for bias before use and a summary of the audit's results is publicly available on the employer's website.
- Some other states (e.g., California, Connecticut, Texas, and Washington) have similar electronic monitoring and biometric privacy laws.
What Can Employers Do?
So how might employers violate these laws when using AI for employment decisions? AI can embed or develop bias if it is not programmed and maintained correctly. Employers risk lawsuits if their AI tool treats some individuals less favorably than others, or if it disproportionately excludes candidates in a protected class.
For example, in 2014 Amazon developed an AI recruiting tool that appeared to rate applicants in a gender-neutral way but in practice largely preferred male applicants over female applicants. Amazon scrapped the tool in 2018.
The EEOC has made clear in recent guidance that facially neutral tests or AI selection procedures can still violate Title VII if they discriminate against individuals based on race, color, religion, sex, or national origin. The burden of complying with applicable federal and state laws lies with employers, which suggests that internal AI oversight will need to become the norm.
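As a concrete illustration of how a "neutral" tool can still raise disparate-impact concerns, the EEOC's longstanding Uniform Guidelines use a four-fifths rule as a rough screen: if one group's selection rate is less than 80% of the highest group's rate, the procedure merits closer scrutiny. The sketch below uses entirely hypothetical numbers and is not a substitute for a formal bias audit:

```python
# Rough adverse-impact screen using the EEOC's "four-fifths rule".
# An impact ratio below 0.8 is an indicator (not proof) of disparate impact.
# All group names and counts below are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants who passed the AI screen."""
    return selected / applicants

# Hypothetical outcomes from an AI resume-screening tool
rates = {
    "group_a": selection_rate(48, 80),  # 0.60
    "group_b": selection_rate(24, 60),  # 0.40
}

highest = max(rates.values())
for group, rate in rates.items():
    impact_ratio = rate / highest
    flag = "REVIEW" if impact_ratio < 0.8 else "ok"
    print(f"{group}: selection rate={rate:.2f}, impact ratio={impact_ratio:.2f} [{flag}]")
```

Here group_b's impact ratio is 0.40 / 0.60 ≈ 0.67, below the 0.8 threshold. Formal bias audits, such as those required in New York City, are more involved, but an impact ratio of this kind is the core statistic.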
As AI in hiring and recruiting emerges as a preeminent technological force, employers should consider following best practices.
- To start, transparency with AI is key. Employers should obtain written consent from applicants before implementing an AI-based tool.
- Employers should disclose what this tool measures, its methods, and how an applicant may request reasonable disability accommodations.
- They should also vet AI vendors, selecting companies that have rigorously tested their products, analyzed the results, and taken constructive steps to mitigate bias.