[Image: A hand choosing a person out of a puzzle. Credit: Getty Images]

Beware of AI Bias in the Recruitment Process

Hiring by algorithm has its advantages, but the pitfall of unintentional AI bias is one to avoid.

On Nov. 6, the Electronic Privacy Information Center (EPIC) filed a complaint with the Federal Trade Commission, alleging that recruiting company HireVue has committed unfair and deceptive practices that violate the FTC Act – specifically, that the company’s claims that it does not use facial recognition technology are false. EPIC also accused HireVue of failing to comply with baseline standards for artificial intelligence decision making, including the OECD AI Principles and the Universal Guidelines for Artificial Intelligence.

The complaint isn’t the first that EPIC has brought to the FTC, and it’s not the first issue of AI bias to arise from the use of artificial intelligence in hiring and recruiting. The accusations against HireVue are just the latest in a field where AI and machine learning (ML) have the potential to streamline the process of recruiting and hiring new employees, but could also exacerbate the biases and privacy issues already present in hiring practices.

“Humans making decisions about humans, as prevalent in most hiring processes today, is particularly fascinating because it is simultaneously relatable and opaque,” said Jen Hwang, chief strategy officer at Tilr. “And further, it almost always requires mutual opt-in, which means hiring is a series of incremental decisions made by all parties throughout its lifecycle.”

Right now, AI and ML in hiring are largely used for repetitive or bulk tasks such as scheduling screening calls or looking for qualifications in a group of applicants. This can be good – it frees up humans to do work like interviewing candidates – but when the algorithms powering those decisions are faulty or biased, there can be problems.
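
To make that concrete, here is a minimal Python sketch of the kind of bulk qualification screening such tools automate. Everything in it – the field names, the required skills, the experience cutoff – is hypothetical, and a real system would parse resumes with far more sophistication than exact string matching:

    # Hypothetical sketch of automated qualification screening.
    REQUIRED_SKILLS = {"python", "sql"}   # assumed requirements for the role
    MIN_YEARS_EXPERIENCE = 3              # assumed, arbitrary cutoff

    def meets_requirements(applicant: dict) -> bool:
        """True if the applicant lists every required skill and
        clears the minimum experience bar."""
        skills = {s.lower() for s in applicant.get("skills", [])}
        years = applicant.get("years_experience", 0)
        return REQUIRED_SKILLS <= skills and years >= MIN_YEARS_EXPERIENCE

    applicants = [
        {"name": "A", "skills": ["Python", "SQL"], "years_experience": 5},
        {"name": "B", "skills": ["Java"], "years_experience": 10},
    ]
    shortlist = [a for a in applicants if meets_requirements(a)]
    print([a["name"] for a in shortlist])  # -> ['A']

Even a toy filter like this shows where bias can creep in: whoever chooses the required-skills list and the cutoff is encoding judgment calls directly into the algorithm.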

Solving Recruitment Problems

At its best, artificial intelligence both makes recruitment easier and helps employers find candidates they otherwise might not.

“AI addresses one of the biggest challenges of recruiting: As a recruiter, you want to cast the widest possible net to find a diverse group of candidates, but you also don’t want to drown in applicants, with many of them not being a good fit,” said Montra Ellis, director of strategy and innovation at Ultimate Software, an HR tech company. Machine learning makes it easier to find interesting candidates from a large pool and learns from a recruiter’s manual actions to refine the search, Ellis said.

When an algorithm’s ML capabilities can continuously improve upon outcomes, every part of the job-seeking process can become more efficient, said Hwang. “AI drives the ability for organizations to process volumes of information faster and, in turn, distill the insights into decision support and actions,” she said.
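
A hedged sketch of what “learning from a recruiter’s manual actions” could look like in its simplest form – the weighting scheme here is invented for illustration, not any vendor’s actual method:

    # Each time a recruiter shortlists someone, bump the weight of that
    # candidate's skills; rank the rest of the pool by those weights.
    from collections import defaultdict

    weights = defaultdict(float)

    def record_shortlist(candidate: dict) -> None:
        for skill in candidate["skills"]:
            weights[skill] += 1.0

    def score(candidate: dict) -> float:
        return sum(weights[s] for s in candidate["skills"])

    record_shortlist({"skills": ["python", "sql"]})
    pool = [
        {"name": "C", "skills": ["python"]},
        {"name": "D", "skills": ["design"]},
    ]
    print(max(pool, key=score)["name"])  # -> 'C'

Note that this kind of feedback loop is also exactly what can entrench a recruiter’s existing preferences, a point the article returns to below.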

Some also believe that software represents not the continuation of bias in hiring but the end of it. “I believe software is critical to solving the hiring bias conundrum,” Ellis said – while also acknowledging that there are recent examples to show that isn’t how things always work out.

The Problem of AI Bias

Bias tends to carry a negative undertone, Hwang acknowledged, but it is not necessarily deliberate or representative of a desire to be exclusionary. The first step is recognizing that AI bias exists, she said, then considering alternative approaches in hiring practices that mitigate it.

That can take many different forms, she said. For example, if a verbal Q&A interview isn’t the best way to hire for a position and could make things harder for good candidates, why not do things differently with technology’s help?

“If the goal of an interview is to assess fit in capabilities and alignment in values, then consider redefining the evaluation process from a verbal discussion to a working contribution which can be assessed based on criteria set up in an AI/ML system,” Hwang suggested.
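
As a rough illustration of what “criteria set up in an AI/ML system” might mean in practice, here is a minimal rubric-scoring sketch; the criteria and weights are hypothetical:

    # Score a work sample against preset, weighted criteria instead of
    # relying on impressions from a verbal interview.
    CRITERIA = {"correctness": 0.5, "clarity": 0.3, "tests": 0.2}

    def rubric_score(ratings: dict) -> float:
        """Weighted average of 0-1 ratings on each criterion."""
        return sum(CRITERIA[c] * ratings.get(c, 0.0) for c in CRITERIA)

    print(round(rubric_score({"correctness": 0.9, "clarity": 0.7, "tests": 0.5}), 2))
    # 0.76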

Changing the input data for the algorithms powering hiring software is another option, Hwang said. The program could be designed to look for specific skills, for example, rather than examining how polished a resume is. It could also look for a required type of education, such as a program keyword, without identifying the school’s name.

“These are data points that will eventually become relevant but are arguably tremendous exposures to introducing bias in initial applicant considerations,” Hwang said. “And because humans directly influence how machines learn, it is imperative to be mindful of the ethics and implications of all inputs, whether included or excluded, to ensure that mitigating bias is always top of mind.”
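
One way to picture the input-masking idea Hwang describes – with hypothetical field names, since the article doesn’t specify a schema – is a filter that strips bias-prone attributes before an application reaches the scoring step:

    # Keep skill- and program-level signals; drop fields that invite
    # halo effects in the initial screen.
    BIAS_PRONE_FIELDS = {"school_name", "resume_formatting_score", "photo"}

    def mask_application(application: dict) -> dict:
        return {k: v for k, v in application.items()
                if k not in BIAS_PRONE_FIELDS}

    raw = {
        "skills": ["data analysis", "accounting"],
        "degree_program": "finance",            # program keyword: kept
        "school_name": "Prestige University",   # excluded at this stage
        "resume_formatting_score": 0.93,        # polish, not skill: excluded
    }
    print(mask_application(raw))
    # {'skills': ['data analysis', 'accounting'], 'degree_program': 'finance'}

Those excluded fields might, as Hwang notes, become relevant later in the process; the point is to keep them out of the initial cut.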

It’s also important to consider where the data used to develop AI and ML hiring systems comes from. “Companies simply cannot build talent acquisition software exclusively using their own data,” Ellis said. “A company’s data should be enhanced or supplanted by multiple external aggregated data sets.”

There is bias involved in deciding who looks like a “good” candidate, Ellis said, even if that bias is not applied consciously.

“As companies continue to hire over time, a self-reinforcing loop is created with confirmation bias or similarity-attraction effect,” Ellis said. “Unfortunately, the results and standards of who we hire and how often evolve from this entirely ‘human’ but flawed process.”
