Title VII and the Use of AI in Employment Decisions

Employers are increasingly turning to artificial intelligence (“AI”) for assistance in making employment decisions, and although AI can help reduce the risk of disparate treatment, employers should be aware of its potential to create a disparate impact. Title VII of the Civil Rights Act of 1964 (“Title VII”) prohibits discrimination on the basis of race, color, religion, sex (including pregnancy, sexual orientation, and gender identity), or national origin in employment practices such as recruiting, hiring, monitoring, transferring, evaluating, and terminating.

While New York City is currently the only jurisdiction that directly regulates the use of AI in employment decisions, the EEOC has issued guidance on the use of AI in the workplace, and, as a result of President Biden’s October 30, 2023 Executive Order, we expect the Secretary of Labor to issue best practices on the use of AI in employment decisions soon.

New York City

New York City Local Law 144 regulates the use of automated employment decision tools. Automated employment decision tools are “any computational process, derived from machine learning, statistical modeling, data analytics, or artificial intelligence, that issues simplified output, including a score, classification, or recommendation, that is used to substantially assist or replace discretionary decision making for making employment decisions that impact natural persons.”

Under Local Law 144, employers or employment agencies that use an automated employment decision tool to screen a candidate or employee for an employment decision must:

  • Conduct a bias audit no more than one year before using the tool (a simplified sketch of the selection-rate math behind a bias audit follows this list);
  • Make the results of the most recent bias audit publicly available on the employer’s or employment agency’s website before using the tool; and
  • Notify each candidate or employee who applied for a position of the following:
    • that an automated employment decision tool will be used in connection with the evaluation of the employee or candidate;
    • the job qualifications and characteristics that the automated employment decision tool will use in the assessment of the employee or candidate; and
    • information about the type of data collected for the automated decision tool, the source of such data, and the employer or employment agency’s data retention policy.
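To make the bias-audit concept concrete, the sketch below shows the kind of selection-rate and impact-ratio arithmetic that underlies such an audit. This is a simplified illustration only; the category labels and counts are hypothetical, and an actual bias audit must be performed by an independent auditor and satisfy the detailed requirements of the City’s implementing rules.

```python
# Simplified illustration only; not a substitute for the independent
# bias audit Local Law 144 requires. All category labels and counts
# below are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a category selected by the tool."""
    return selected / applicants

# Hypothetical outcomes of an automated employment decision tool.
outcomes = {
    "Category A": (48, 80),   # (selected, total applicants)
    "Category B": (24, 60),
    "Category C": (30, 40),
}

rates = {cat: selection_rate(sel, total) for cat, (sel, total) in outcomes.items()}
highest_rate = max(rates.values())

# Impact ratio: a category's selection rate relative to the highest rate.
for cat, rate in rates.items():
    print(f"{cat}: selection rate {rate:.0%}, impact ratio {rate / highest_rate:.2f}")
```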

The EEOC’s Guidance on the Use of AI in Employment Decisions

Earlier this year, the EEOC issued guidance addressing whether and how employers should monitor new algorithmic decision-making tools used in employment selection procedures. Examples of algorithmic decision-making tools include:

  • resume scanners that prioritize applications that use pre-determined keywords;
  • employee monitoring software that can rate employees on the basis of their output;
  • “virtual assistants” or “chatbots” that ask job candidates about their qualifications and reject those who do not meet pre-defined requirements;
  • video interviewing software that evaluates candidates based on their facial expressions and speech patterns; and
  • testing software that provides “job fit” scores for applicants or employees regarding their personalities, aptitudes, cognitive skills, or perceived “cultural fit” based on their performance on a game or on a more traditional test.

This year’s guidance refers employers to the EEOC’s 1978 Uniform Guidelines on Employee Selection Procedures (“Guidelines”). Although the Guidelines pre-date AI, this year’s EEOC guidance makes clear that “selection procedures” are any “measure, combination of measures, or procedure” used as the basis for an employment decision, including AI. The Guidelines advise employers on how to determine whether the tests and other selection procedures they use as a basis for employment decisions create a disparate impact in violation of Title VII. Employment decisions include, but are not limited to, hiring, promoting, and demoting. Disparate impact occurs when a seemingly neutral test or selection criterion disproportionately excludes individuals based on a protected category.

As a general rule of thumb, employers may rely on the four-fifths rule in determining whether a selection procedure creates a disparate impact. Under the four-fifths rule, a selection rate for any protected category that is less than four-fifths (80%) of the selection rate for the group with the highest selection rate would generally be regarded as evidence of a disparate impact. This year’s EEOC guidance provides the following example:

In a personality test scored by an algorithm, the selection rate for Black applicants was 30% and the selection rate for White applicants was 60%. The ratio of the two rates is thus 30/60 (or 50%). Because 30/60 (or 50%) is lower than 4/5 (or 80%), the four-fifths rule says that the selection rate for Black applicants is substantially different than the selection rate for White applicants in this example, which could be evidence of discrimination against Black applicants.
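The arithmetic behind this example is simple. A minimal sketch follows, using only the 30% and 60% selection rates from the guidance; everything else is assumed for illustration.

```python
# The EEOC's personality-test example as a four-fifths rule check.
# The 30% and 60% selection rates come from the guidance; the rest
# is illustrative.

FOUR_FIFTHS = 4 / 5  # 80%

black_rate = 0.30   # selection rate for Black applicants
white_rate = 0.60   # selection rate for White applicants (the highest rate)

ratio = black_rate / white_rate  # 0.30 / 0.60 = 0.50
if ratio < FOUR_FIFTHS:
    print(f"Ratio {ratio:.0%} is below 80%: the selection rates are "
          "substantially different, which could be evidence of disparate impact.")
```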

It is worth noting that the four-fifths rule is just a general rule of thumb, and compliance with it does not guarantee that a selection procedure is free of adverse impact for purposes of Title VII. That said, if an employer determines that a selection procedure does have an adverse impact, the employer must determine whether the procedure is “job related and consistent with business necessity.” Employers must also consider whether there is a suitable alternative selection procedure. This is especially important for the development and use of AI: where an alternative selection procedure would serve the same purpose but eliminate the disparate impact, employers should use that alternative.

Key Takeaways

The law and guidance on the use of AI in employment is still developing. Indeed, President Biden’s October 30, 2023 Executive Order instructed the Secretary of Labor to develop and publish, within 180 days, best practices for employers that could be used to mitigate AI’s potential harms to employees’ well-being and maximize its potential benefits.

In the meantime, employers who use AI in employment decisions would be well advised to ensure that their use of AI does not create a disparate impact. To that end, employers may test AI software for disparate impact before relying on it to make employment decisions, and may periodically retest software that is already in use. When determining whether new or existing AI is creating a disparate impact, employers may consider relying on the four-fifths rule. If the testing or monitoring shows a disparate impact, employers will want to consider whether there is an alternative selection process that would eliminate it. Finally, employers may consider keeping a record of their final analysis.
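As a purely hypothetical sketch of what such periodic testing and record-keeping could look like in practice (the group names, counts, log format, and 80% threshold are all assumptions, not anything prescribed by the EEOC):

```python
import csv
from datetime import datetime

# Hypothetical periodic monitoring: compute each group's selection rate
# and impact ratio from recent tool decisions, then append the analysis
# to a CSV log so the employer retains a record of it.

def log_disparate_impact_check(outcomes, log_path="ai_audit_log.csv"):
    rates = {group: selected / total for group, (selected, total) in outcomes.items()}
    highest_rate = max(rates.values())
    with open(log_path, "a", newline="") as f:
        writer = csv.writer(f)
        for group, rate in rates.items():
            ratio = rate / highest_rate
            status = "review" if ratio < 0.8 else "ok"  # four-fifths rule of thumb
            writer.writerow([datetime.now().isoformat(), group,
                             f"{rate:.3f}", f"{ratio:.3f}", status])

# Example: a quarterly snapshot of (selected, total applicants) per group.
log_disparate_impact_check({"Group A": (50, 100), "Group B": (35, 100)})
```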

In conclusion, while this blog is limited to Title VII, employers should consider all potential implications of using AI in employment decisions. For example, the use of AI in employment decisions could implicate the Age Discrimination in Employment Act; see the $365,000 iTutorGroup settlement resolving an EEOC lawsuit alleging that the company’s AI software rejected more than 200 older applicants. Likewise, the use of AI in employment decisions could implicate the Americans with Disabilities Act, for which the EEOC has similarly issued guidance.

Ashley Mitchell
https://www.connmaciel.com

Ashley D. Mitchell is an Associate in the Chicago office of Conn Maciel Carey LLP supporting both the OSHA and Labor and Employment practice groups. Ms. Mitchell represents and advises clients in a broad range of employment issues involving the employer-employee relationship including wage and hour disputes, Title VII discrimination claims, and compliance with the Americans with Disabilities Act (ADA). Ms. Mitchell also counsels employers on workplace policies and procedures, harassment training, and employee handbooks.
