New York • Artificial intelligence (AI) promises to make hiring an unbiased utopia.
There’s certainly plenty of room for improvement. Employee referrals, a process that tends to leave out underrepresented groups, still make up the bulk of companies’ hires. Recruiters and hiring managers also bring their own biases to the process, studies have found, often choosing people with the “right-sounding” names and educational backgrounds.
Across the pipeline, companies lack racial and gender diversity, with the ranks of underrepresented people thinning at the highest levels of the corporate ladder. Fewer than 5% of CEOs at Fortune 500 companies are women, and that number will shrink further in October when PepsiCo Inc CEO Indra Nooyi steps down. Racial diversity among Fortune 500 boards is almost as dismal: four out of every five new board appointees in 2016 were white. There are only three black CEOs in the same group.
“Identifying high-potential candidates is very subjective,” said Alan Todd, CEO of CorpU, a technology platform for leadership development. “People pick who they like based on unconscious biases.”
AI advocates argue the technology can eliminate some of these biases. Instead of relying on people’s feelings to make hiring decisions, companies such as Entelo and Stella.ai use machine learning to detect the skills needed for certain jobs. The AI then matches candidates who have those skills with open positions. The companies claim not only to find better candidates, but also to pinpoint those who may have previously gone unrecognised in the traditional process.
Stella’s algorithm assesses candidates based only on skills, for example, said founder Rich Joffe. “The algorithm is only allowed to match based on the data we tell it to look at. It’s only allowed to look at skills, it’s only allowed to look at industries, it’s only allowed to look at tiers of companies.” That limits bias, he said.
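The restriction Joffe describes can be pictured as a feature allow-list: the matcher never sees any attribute outside it. A minimal Python sketch, with hypothetical field names and scoring (this is not Stella.ai's actual system):

```python
# Hypothetical skill-restricted matcher: the scorer can only see an
# allow-list of fields, so attributes such as name or age never reach it.

ALLOWED_FIELDS = {"skills", "industry", "company_tier"}  # assumed field names

def restrict(candidate: dict) -> dict:
    """Drop every attribute the matcher is not allowed to consider."""
    return {k: v for k, v in candidate.items() if k in ALLOWED_FIELDS}

def match_score(candidate: dict, job_skills: set) -> float:
    """Fraction of the job's required skills the candidate holds."""
    visible = restrict(candidate)
    skills = set(visible.get("skills", []))
    return len(skills & job_skills) / len(job_skills) if job_skills else 0.0

candidate = {
    "name": "A. Example",          # never seen by the scorer
    "skills": ["python", "sql"],
    "industry": "finance",
    "company_tier": 1,
}
print(match_score(candidate, {"python", "sql", "spark"}))  # 2 of 3 skills held
```

The point of the allow-list is structural: bias on a field the model cannot read is impossible, though bias can still leak in through correlated fields it can read.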
Entelo today released Unbiased Sourcing Mode, a tool that further anonymises hiring. The software allows recruiters to hide names, photos, school, employment gaps and markers of someone’s age, as well as to replace gender-specific pronouns — all in the service of reducing various forms of discrimination.
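In code, that kind of anonymisation amounts to blanking identifying fields and rewriting pronouns before a human sees the profile. A rough sketch with hypothetical profile keys (this is not Entelo's implementation; note that "her" is genuinely ambiguous between object and possessive, which real redaction tools must handle more carefully):

```python
import re

# Illustrative redaction pass: hide identifying fields and swap gendered
# pronouns for neutral ones in free-text summaries.

PRONOUNS = {"he": "they", "she": "they", "him": "them",
            "her": "their",   # ambiguous (object vs possessive); simplified
            "his": "their", "hers": "theirs"}

def neutralise_pronouns(text: str) -> str:
    def swap(m):
        word = m.group(0)
        repl = PRONOUNS[word.lower()]
        return repl.capitalize() if word[0].isupper() else repl
    pattern = r"\b(" + "|".join(PRONOUNS) + r")\b"
    return re.sub(pattern, swap, text, flags=re.IGNORECASE)

def anonymise(profile: dict) -> dict:
    hidden = {"name", "photo_url", "school", "graduation_year"}  # assumed keys
    clean = {k: "[hidden]" if k in hidden else v for k, v in profile.items()}
    clean["summary"] = neutralise_pronouns(profile.get("summary", ""))
    return clean
```

For example, `neutralise_pronouns("He said his team shipped")` yields `"They said their team shipped"`, and `anonymise` replaces the name field with a placeholder before display.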
AI is also being used to help develop internal talent. CorpU has formed a partnership with the University of Michigan’s Ross School of Business to build a 20-week online course that uses machine learning to identify high-potential employees. Those ranked highest aren’t usually the individuals who were already on the promotion track, Todd said, and often exhibit qualities such as introversion that are overlooked during the recruitment process.
“Human decision-making is pretty awful,” said Solon Barocas, an assistant professor in Cornell’s Information Science department who studies fairness in machine learning. But we shouldn’t overestimate the neutrality of technology either, he cautioned.
Barocas’ research has found that machine learning in hiring, much like its use in facial recognition, can result in unintentional discrimination. Algorithms can carry the implicit biases of those who programmed them. Or they can be skewed to favour certain qualities and skills that are overwhelmingly exhibited among a given data set. “If the examples you’re using to train the system fail to include certain types of people, then the model you develop might be really bad at assessing those people,” Barocas explained.
Not all algorithms are created equal — and there’s disagreement among the AI community about which algorithms have the potential to make the hiring process more fair. One type of machine learning relies on programmers to decide which qualities should be prioritised when looking at candidates. These “supervised” algorithms can be directed to scan for individuals who went to Ivy League universities or who exhibit certain qualities, such as extroversion.
“Unsupervised” algorithms determine on their own which data to prioritise. The machine makes its own inferences based on existing employees’ qualities and skills to determine those needed by future employees. If that sample only includes a homogeneous group of people, it won’t learn how to hire different types of individuals — even if they might do well in the job.
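The sampling problem can be made concrete with a toy example (invented data, not any vendor's model): a similarity-based scorer trained only on past hires from one school ranks an equally skilled outsider lower, because the school feature dominates the comparison.

```python
# Toy illustration of learning from a homogeneous sample: every past hire
# attended the same school, so the model penalises anyone who did not.

past_hires = [
    {"school": "Ivy U", "skill": 8},
    {"school": "Ivy U", "skill": 7},
    {"school": "Ivy U", "skill": 9},
]

def similarity(candidate, hire):
    # Naive feature match: school counts as much as skill closeness.
    school = 1.0 if candidate["school"] == hire["school"] else 0.0
    skill = 1.0 - abs(candidate["skill"] - hire["skill"]) / 10
    return (school + skill) / 2

def score(candidate):
    # Candidate is rated by their closest match among existing employees.
    return max(similarity(candidate, h) for h in past_hires)

insider = {"school": "Ivy U", "skill": 7}
outsider = {"school": "State U", "skill": 9}   # stronger skills, wrong school
print(score(insider), score(outsider))         # insider scores higher
```

Here the outsider has the higher skill rating, yet scores lower than the insider, which is exactly the failure mode Barocas describes: the model is bad at assessing the kinds of people its training sample never contained.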
Companies can take measures to mitigate these forms of programmed bias. Pymetrics, an AI hiring start-up, has programmers audit its algorithm to see if it’s giving preference to any gender or ethnic group. Software that heavily considers ZIP code, which strongly correlates with race, will likely have a bias against black candidates, for example. An audit can catch these prejudices and allow programmers to correct them. — Bloomberg
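One common way to run such an audit is the “four-fifths rule” from US employment-selection guidance: flag the model if any group’s selection rate falls below 80% of the highest group’s rate. A minimal sketch with made-up decisions (not Pymetrics’ actual process):

```python
from collections import defaultdict

# Adverse-impact audit: compute per-group selection rates and flag any
# group whose rate is below 80% of the best group's rate.

def selection_rates(decisions):
    """decisions: list of (group, selected_bool) pairs."""
    totals, picked = defaultdict(int), defaultdict(int)
    for group, selected in decisions:
        totals[group] += 1
        picked[group] += selected
    return {g: picked[g] / totals[g] for g in totals}

def audit(decisions, threshold=0.8):
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: r / best >= threshold for g, r in rates.items()}

# Hypothetical outcomes: group A selected 8 of 10, group B only 4 of 10.
decisions = ([("A", True)] * 8 + [("A", False)] * 2
             + [("B", True)] * 4 + [("B", False)] * 6)
print(audit(decisions))  # group B fails the four-fifths test
```

A failed audit like this does not say *why* the model disfavours group B, only that it does; the programmers must then trace the disparity back to features such as ZIP code and correct for them.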