The RPO Voice: Insights for the RPO Marketplace

Current, Valuable Information to Keep Your Hiring AI Compliant

Written by Guest | Tue, Jul 26, 2022

Is your use of artificial intelligence in hiring decisions compliant with current employment laws? Employers’ use of software, algorithms, and artificial intelligence has drawn the attention of the Equal Employment Opportunity Commission (“EEOC”) and the Department of Justice (“DOJ”). Both agencies recently issued guidance alerting employers to the potential for discrimination in the use of computer-assisted decisions in employment.

RPOA’s Executive Director, Lamees Abourahma, recently hosted a members-only event featuring Annemarie DiNardo Cleary, Esq. and Brendan Horgan, Esq., two employment attorneys with Eckert Seamans Cherin & Mellott, LLC. Attorneys Cleary and Horgan discussed how employers could run afoul of existing employment laws through their use of software, algorithms, and artificial intelligence to make hiring and employment decisions, as well as practices and policies employers can adopt to comply with the law when using those tools.

If your business currently uses software, algorithms, or artificial intelligence, or you are interested in this topic, you can watch the entire on-demand webinar here. The following summary of Attorneys Cleary and Horgan’s presentation is intended to keep readers current on developments in the law. It is not intended to be legal advice. As laws and regulations may have changed following the date of the presentation, please contact Annemarie DiNardo Cleary at acleary@eckertseamans.com, Brendan Horgan at bhorgan@eckertseamans.com, or any other employment attorney with whom you are working, to discuss your particular circumstances.

People Analytics in Employment: Promising Practices to Avoid ADA Claims

The increased use of software, algorithms, and artificial intelligence in the hiring process in recent years led the EEOC and DOJ to examine whether such computer-assisted decisions might have a discriminatory impact. Increasingly, employers use automated tools to assess applicants and employees. In the hiring process, these tools include applicant tracking and resume-screening software that evaluates resumes and identifies applicants with the skills sought for open positions; online games or tests that assess and advance applicants through the hiring process; and automated interviews in which applicants’ responses to a set of questions are recorded and scored against a predetermined set of metrics. During the pandemic, while employees worked from home, many companies implemented employee monitoring or talent management software to assess employee productivity for purposes of determining promotions, raises, bonuses, or retention. When used with care, these automated tools can streamline processes, increase hiring and retention success, and control for bias. But they can also unintentionally perpetuate bias and discriminatory impact.

The EEOC’s and the DOJ’s recently issued guidance alerts employers to the potential for discrimination in the use of these tools in employment-related decisions. The agencies focused on the potential discriminatory impact on applicants and employees with disabilities but made plain the broader scope of concerns regarding the use of automated tools by employers. Both agencies identified promising practices to mitigate potential discriminatory impact. Several states and local governments also have enacted or are considering legislation aimed at addressing the use of software, algorithms, and artificial intelligence in employment and other areas.

Inadvertent Discrimination in Automation

Although many automated employment-related screening tools were designed to combat implicit bias, an employer can inadvertently discriminate when using analytics by relying on biased data or metrics, failing to regularly audit the results for discriminatory impact, measuring an impermissible quality, or failing to offer reasonable accommodations.

If the data or metrics on which a process is based are biased, the process or algorithm may create discriminatory outcomes. For example, an employer that uses a resume-screening tool to filter out applicants with gaps in employment may disproportionately screen out women or military dependents. If an employer uses an automated video interviewing system that assesses facial expressions and body movements, the software may return a lower score for an applicant whose disability affects the manner in which he or she responds to interview questions without regard to the applicant’s ability to perform the essential functions of the position.
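
To make the resume-gap example concrete, the minimal Python sketch below shows how a facially neutral screening rule operates. It is illustrative only: the data layout, the six-month cutoff, and the sample dates are hypothetical, not drawn from any particular vendor’s tool.

    # A minimal sketch of a facially neutral resume-gap screen of the kind
    # described above. The field layout, the 6-month cutoff, and the sample
    # dates are hypothetical.
    from datetime import date

    def max_gap_months(jobs):
        """jobs: list of (start, end) date pairs, sorted by start date."""
        gaps = [
            (nxt_start.year - end.year) * 12 + (nxt_start.month - end.month)
            for (_, end), (nxt_start, _) in zip(jobs, jobs[1:])
        ]
        return max(gaps, default=0)

    def passes_screen(jobs, cutoff_months=6):
        # The rule never mentions sex or military status, yet it can
        # disproportionately exclude caregivers and military dependents,
        # whose employment gaps say nothing about job skills.
        return max_gap_months(jobs) <= cutoff_months

    resume = [(date(2015, 1, 1), date(2018, 6, 1)),
              (date(2019, 8, 1), date(2022, 1, 1))]   # 14-month gap
    print(passes_screen(resume))  # False -> applicant is silently filtered out

Nothing in the rule refers to a protected category; the risk only becomes visible when outcomes are examined by group, which is why the auditing practices discussed below matter.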

Moreover, if an applicant or employee is unaware of the use of technology in the evaluation process or the skills the technology is assessing, the applicant or employee may not have a meaningful opportunity to request an alternate means of assessment as a reasonable accommodation.

Laws and Regulations to Consider

As with any employment process, employers must consider federal, state, and local anti-discrimination laws when using software, algorithms, and artificial intelligence. Though the recent guidance from the EEOC and the DOJ focused on the Americans with Disabilities Act (the “ADA”), employers must keep in mind that a broader set of federal laws is implicated, including Title VII of the Civil Rights Act of 1964, the Pregnancy Discrimination Act, the Age Discrimination in Employment Act, and the Genetic Information Nondiscrimination Act.

Many states also have enacted their own anti-discrimination laws, some of which include protected categories not found in federal law, such as military dependents. Employers should be aware of the state and local laws that apply to their applicant pool and workforce, including the laws of the states in which applicants and employees live and work.

EEOC Guidance:

The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees (the “EEOC Guidance”)

In the fall of 2021, the EEOC launched an initiative to analyze employers’ use of artificial intelligence and algorithmic decision-making tools in employment due to concerns that these tools might perpetuate bias and discrimination in employment. On May 12, 2022, the EEOC issued the EEOC Guidance to address the use of these tools in the context of the ADA. The EEOC Guidance identifies the technologies used, explains how those technologies might violate the ADA, and provides several “Promising Practices” for employers to avoid violating the ADA.

The EEOC identified ways in which the use of automated computer processes might violate the ADA:

In the EEOC Guidance, the EEOC acknowledged that using automation in hiring and other employment decisions can lead to cost and time savings, increased objectivity, and decreased bias. It also identified several common ways in which employers can violate the ADA when using such tools. An employer whose automated software, algorithm, or AI fails to offer an accommodation process, screens out individuals with disabilities, or makes impermissible medical inquiries may violate the ADA. The EEOC was particularly focused on ensuring that applicants and employees with disabilities are not unfairly disadvantaged by technology. To that end, it suggested several “Promising Practices.”

EEOC’s “Promising Practices”:

  • Start with the right data

  • Watch for (audit) discriminatory outcomes (see the sketch after this list)

  • Disclose to applicants and employees that automated processes are in use and how they are being used

  • Provide clear instructions for the accommodation process

  • Remain open to all requests for reasonable accommodation

  • Limit assessment to job-related criteria

  • Train staff to recognize requests for an accommodation
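
One concrete way to watch for discriminatory outcomes is the “four-fifths rule” from the EEOC’s Uniform Guidelines on Employee Selection Procedures: a selection rate for any group that is less than 80 percent of the rate for the most-selected group may indicate adverse impact. The sketch below illustrates the arithmetic only; the group labels and outcome counts are hypothetical.

    # A minimal sketch of a selection-rate (adverse-impact) audit using the
    # four-fifths rule. Group labels and outcome counts are hypothetical.
    from collections import Counter

    def selection_rates(outcomes):
        """outcomes: list of (group, was_selected) pairs."""
        applied = Counter(group for group, _ in outcomes)
        selected = Counter(group for group, ok in outcomes if ok)
        return {g: selected[g] / applied[g] for g in applied}

    def four_fifths_flags(rates, threshold=0.8):
        """Flag groups whose selection rate is under 80% of the
        most-selected group's rate."""
        top = max(rates.values())
        return {g: round(r / top, 3) for g, r in rates.items() if r / top < threshold}

    # Hypothetical screening outcomes: 100 applicants per group.
    outcomes = ([("Group A", True)] * 48 + [("Group A", False)] * 52
                + [("Group B", True)] * 30 + [("Group B", False)] * 70)

    rates = selection_rates(outcomes)   # Group A: 0.48, Group B: 0.30
    print(four_fifths_flags(rates))     # {'Group B': 0.625} -> below 0.8, investigate

A ratio below 0.8 does not prove discrimination, and a ratio above it is not a safe harbor; the value of a regular audit is that it triggers closer review of the tool, its data, and the criteria it measures.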

DOJ Guidance:

Algorithms, Artificial Intelligence, and Disability Discrimination in Hiring (the “DOJ Guidance”)

On the same day the EEOC issued its guidance, the DOJ issued the DOJ Guidance addressing the use of algorithms and artificial intelligence in the hiring process and employment decisions by state and local government employers. The DOJ Guidance cautions state and local government employers that screening out persons with disabilities or making impermissible medical inquiries via automated technology can violate the ADA, and that a reasonable accommodation process is required. The DOJ Guidance reminds employers that technologies must evaluate job skills, not disabilities, and sets forth the following recommended practices to avoid discrimination:

  • telling applicants about the type of technology being used and how applicants will be evaluated;

  • providing enough information to applicants so that they can decide whether to seek a reasonable accommodation; and

  • providing and implementing clear procedures for requesting reasonable accommodations and ensuring that requesting an accommodation does not hurt the applicant’s chance of getting the job.

State and Municipal Laws:

Illinois, Maryland, and New York City have enacted laws regulating the use of software, algorithms, and artificial intelligence in the employment context. While each requires some form of advance notice to applicants of the tools in use, the laws take varying approaches to the issue.

Illinois – Artificial Intelligence Video Interview Act (enacted 2019)

820 ILCS 42

  • Regulates the use of artificial intelligence to analyze video interviews

  • Applicants must be notified in advance of use of the tool and what characteristics it will evaluate

  • Distribution of videos is limited to those whose expertise is necessary to the applicant’s evaluation

  • All copies of videos must be deleted within 30 days of an applicant’s request

  • Employers relying solely on AI video analysis to select applicants for in-person interviews must collect and report annually on the race and ethnicity of applicants: those who are hired, those who are offered in-person interviews, and those who are not offered interviews

Maryland – Rules for use of facial recognition technology (enacted 2020)

Md. Code, Lab. & Empl. § 3-717

Prohibits the use of facial recognition technology during pre-employment job interviews unless:

  • Applicant signs written consent and waiver in advance

  • Waiver includes the applicant’s name, the date of the interview, consent to the use of facial recognition, and an acknowledgment that the applicant has read the waiver

New York City – Rules for “Automated Employment Decision Tools” (goes into effect January 1, 2023)

Amendment to NYC Admin. Code § 20-870 et seq.

Prohibits use of “automated employment decision tool” unless:

  • Tool undergoes an annual bias audit by an independent auditor

  • Audit results are made publicly available on the employer’s website

  • No less than 10 business days before use of the tool, all applicants are notified of:

      • The fact that the tool will be used;

      • The job qualifications to be assessed; and

      • The data to be collected

Possible Future Laws

Legislation to regulate the use of automated tools in employment is pending in more than 15 states and localities. Of note, two bills proposed in New York and one in the District of Columbia would establish civil penalties and create possible private causes of action against employers for the discriminatory use of these tools. Proposed laws in California, New York, and the District of Columbia would require advance notice to applicants, public disclosures to government agencies concerning the use of automated tools, and mandatory annual bias audits with publicly available results.

Takeaways

The EEOC and DOJ both noted that their respective guidance does not announce new policy but rather clarifies employers’ obligations under existing law. Employers should treat this as notice that the EEOC and DOJ are focused on the use and potential discriminatory impact of automated tools, not only under the ADA but also under other anti-discrimination laws.

As with any other employment process under scrutiny, employers should have written policies governing the use of analytics in employment decisions and should maintain detailed records concerning the decision-making process. Automated tools should be designed to test for abilities necessary for the advertised position, developed with individuals with disabilities in mind, and audited regularly for biased outcomes. Employers using automated tools in the hiring process and for other employment decisions also must consider a regulatory environment that includes different requirements in different states. Don’t let your use of people analytics put you at risk of a discrimination claim.