Policy

The AI hiring industry is under scrutiny—but it’ll be hard to fix

November 7, 2019

The Electronic Privacy Information Center (EPIC) has asked the Federal Trade Commission to investigate HireVue, an AI tool that helps companies decide which applicants to hire.

What’s HireVue? HireVue is one of a growing number of artificial intelligence tools that companies use to assess job applicants. Its algorithm analyzes video interviews, using everything from word choice to facial movements to compute an “employability score” that is compared against those of other applicants. More than 100 companies have already used it on over a million applicants, according to the Washington Post.
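HireVue has not published how its model works, so the following is only a rough sketch of the general pattern such tools follow: turn interview signals into numeric features, combine them into a single score, and rank that score against a pool of past applicants. Every feature name, weight, and number here is invented for illustration.

```python
# Hypothetical sketch only: HireVue has not disclosed its model.
# Feature names, weights, and the applicant pool below are all invented.

from statistics import NormalDist

# Per-applicant features, assumed to be scaled to 0-1 by some upstream
# video/speech analysis step (not shown here).
features = {
    "word_choice": 0.72,
    "speech_pace": 0.65,
    "facial_expression": 0.58,
}

# Invented weights; a real system would learn these from past hiring
# outcomes, which is exactly where critics say bias can creep in.
weights = {
    "word_choice": 0.5,
    "speech_pace": 0.2,
    "facial_expression": 0.3,
}

# Weighted sum -> a single "employability" score.
score = sum(weights[k] * features[k] for k in features)

# Rank the score against a pool of previous applicants, modeled here as
# roughly normal, to get a percentile-style comparison.
pool = NormalDist(mu=0.6, sigma=0.1)
percentile = pool.cdf(score) * 100

print(f"score {score:.2f}, higher than about {percentile:.0f}% of the pool")
```

The detail that matters in this sketch is that both the weights and the comparison pool come from historical data; if that history is skewed, the rankings will be too.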

What’s the problem? It’s hard to predict which workers will be successful from signals like facial expressions. Worse, critics worry that the algorithm is trained on limited data and so will be more likely to mark “traditional” applicants (white, male) as more employable. As a result, applicants who deviate from that mold, including people who don’t speak English as a native language or who are disabled, are likely to get lower scores, experts say. The tool also encourages applicants to game the system by interviewing in a way they know HireVue will like.
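One standard way auditors and regulators quantify this kind of disparity is the EEOC’s “four-fifths rule”: if the selection rate for one group of applicants falls below 80% of the rate for the most-selected group, the process is flagged for adverse impact. A minimal check looks like the sketch below; the counts are made up, since vendors rarely release real outcome data.

```python
# Adverse-impact check using the EEOC "four-fifths rule".
# The applicant and selection counts below are hypothetical; auditing a
# real tool would require the vendor's actual outcome data.

selections = {
    # group: (number advanced by the tool, number of applicants)
    "group_a": (48, 100),
    "group_b": (30, 100),
}

rates = {group: passed / total for group, (passed, total) in selections.items()}
best_rate = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / best_rate
    flag = "adverse impact" if impact_ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {impact_ratio:.2f} ({flag})")
```

A check like this is only possible with the outcome data that, as noted below, companies rarely release.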

What’s next? AI hiring tools are not well regulated, and addressing the problem will be hard for a few reasons. 

—Most companies won’t release their data or explain how their algorithms work, so it’s very difficult to prove any bias. That’s part of the reason there have been no major lawsuits so far. The EPIC complaint, which argues that HireVue has engaged in “unfair and deceptive” practices prohibited by the FTC, is a start. But it’s not clear whether anything will happen: the FTC has received the complaint but has not said whether it will pursue it.

—Other attempts to prevent bias are well-meaning but limited. Earlier this year, Illinois lawmakers passed a law that requires employers to at least tell job seekers when these algorithms will be used, and to get their consent. But that offers little real protection: many people are likely to consent simply because they don’t want to lose the opportunity.

—Finally, just like AI in health care or AI in the courtroom, artificial intelligence in hiring will reproduce society’s biases, and that is a harder problem to solve. Regulators will need to decide how much responsibility companies should shoulder for avoiding the mistakes of a prejudiced society.

