Job hunters face a new hurdle: Impressing AI

(CBSNews) -- If you're on the hunt for a new job, there's a new twist to the hiring process that's increasingly common: Getting judged by artificial intelligence.

Major businesses like Unilever and Ikea are turning to AI programs to sniff out suitable applicants — and predict future job performance — partly as a way to cut down on the time needed to hire employees.

Supporters say the tech can help companies find better job candidates and eliminate some of the inherent bias that executives may not even be aware of, such as leaning toward candidates who are white or male.

But critics say there are issues with relying on algorithms to make complex decisions. One Princeton professor, Arvind Narayanan, is questioning whether AI can accurately predict job success, calling such claims "snake oil."

And Electronic Privacy Information Center counsel John Davisson calls the AI programs a "nightmarish combination of privacy violations and outcomes that aren't based on any easily understood criteria." His group earlier this month petitioned the Federal Trade Commission to start regulating AI's use in the job market and other business applications.

At stake are the roughly 7 million job openings that employers seek to fill each month. Traditional gatekeepers such as human resources professionals are increasingly turning to AI to review resumes — no small task when employment site Glassdoor says the average job opening receives about 250 resumes. Some are also relying on AI programs to score applicants' video interviews, with the goal of skimming the cream from the top.

Some companies say they see results, with one case study from Unilever finding that an AI analysis from a company called HireVue saved the company $1 million by cutting down on the time needed to sort through 250,000 applications. Relying on AI to cull the applicant pool also helped diversify new hires by 16%, it found.

Yet the lack of transparency about the inner workings of these AI applications, as well as the vast amount of data they collect, is raising concerns among lawmakers and privacy experts. Of particular concern is the possibility that these programs could unfairly evaluate certain types of applicants, such as minorities and women.

"It's like anything else — wine, music and food. Some of it is really good and some of it is really bad," said Tomas Chamorro-Premuzic, chief talent scientist at staffing and recruitment company ManpowerGroup, which is also an adviser to HireVue. "The big challenge and question of AI in the world of talent is: Will algorithms be better at knowing us than we do ourselves?"

Gaining traction in the workplace

So far, there appear to be more questions than answers, even as the technology gains traction in the workplace. While there's no federal government tracking or oversight of AI, surveys of human resources executives provide insight into AI's usage in the job market.

Almost 7 in 10 recruiters and hiring managers told LinkedIn in 2018 that AI applications help them save time, according to a survey of 8,800 executives, while about 4 in 10 said AI tools help guard against bias in the hiring process. And employment site Glassdoor says it's increasingly likely that job candidates will encounter AI when they search for work today. 

It's a growing field, with dozens of companies offering different flavors of AI to businesses. It's also attracting venture capital, with hundreds of millions of dollars invested in firms such as HireVue, Pymetrics and Mya, according to CrunchBase.

How it works

Some job applicants might not even realize they've been given the once-over by artificial intelligence. At the most basic level, AI programs analyze resumes and job applications to screen candidates, winnowing the field to a short list of top prospects.
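For illustration only, here is a minimal sketch of what that most basic kind of screening could look like: a weighted keyword score over resume text. The keywords, weights and scoring logic are assumptions made for the example, not any vendor's actual model.

```python
# Hypothetical sketch: rank resumes by weighted keyword matches and keep the top N.
# The keywords, weights and sample resumes are illustrative assumptions only.
import re
from collections import Counter

JOB_KEYWORDS = {"sales": 3, "crm": 2, "negotiation": 2, "quota": 1}  # assumed weights

def score_resume(text: str) -> int:
    """Count keyword occurrences in the resume, weighted by assumed importance."""
    words = Counter(re.findall(r"[a-z]+", text.lower()))
    return sum(weight * words[kw] for kw, weight in JOB_KEYWORDS.items())

def shortlist(resumes: dict[str, str], top_n: int = 5) -> list[str]:
    """Return the candidate names with the highest keyword scores."""
    ranked = sorted(resumes, key=lambda name: score_resume(resumes[name]), reverse=True)
    return ranked[:top_n]

if __name__ == "__main__":
    pool = {
        "candidate_a": "Exceeded sales quota, managed CRM pipeline, led negotiation training.",
        "candidate_b": "General administrative experience with scheduling and filing.",
    }
    print(shortlist(pool, top_n=1))  # ['candidate_a']
```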

Other AI programs are more interactive, although some applicants still might not know they're dealing with an algorithm. Consignment shop ThredUp, for example, uses an AI program to text job candidates to schedule job interviews, according to Glassdoor. Other companies ask candidates to play online games, including one exercise that asks applicants to click on objects that are red balls. The data are then fed through an AI program that analyzes responses to assess a candidate's abilities.

The AI program that's sparking perhaps the most concern is the video interview, where candidates are taped as they respond to recruitment questions posed via webcam. An AI tool then examines the video, analyzing everything from facial gestures to vocabulary, to determine whether the candidate is a good fit.

HireVue, one of the biggest companies in the field, said its AI assessment is partially based on data from companies' top performers: By analyzing vocabulary used by top workers, for instance, the AI program can pick out candidates whose word choices and expressions best match those high performers. It also relies on years of research into industrial and organizational psychology, which focuses on how people behave within organizations.
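As a rough illustration of the vocabulary-matching idea, and not HireVue's actual model, whose internals are not public, one simple approach would be to compare a candidate's word-frequency profile against an aggregate profile built from top performers' transcripts. The cosine-similarity scoring and toy transcripts below are assumed stand-ins.

```python
# Hypothetical sketch of vocabulary matching: compare a candidate transcript's
# word-frequency vector against a profile built from top performers' transcripts.
# The similarity measure (cosine) and the sample text are illustrative assumptions.
import math
import re
from collections import Counter

def word_counts(text: str) -> Counter:
    """Lowercase the text and count its words."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two word-count vectors (1.0 = identical profile)."""
    shared = set(a) & set(b)
    dot = sum(a[w] * b[w] for w in shared)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Aggregate profile from top performers' interview transcripts (toy data).
top_performer_profile = word_counts(
    "we collaborated with the client to close the deal and exceeded our targets"
)

candidate_transcript = "I worked with the client to close the deal and exceeded my targets"
print(round(cosine_similarity(top_performer_profile, word_counts(candidate_transcript)), 2))
```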

"We are judging the answers that the interviewee is giving," said Kevin Parker, CEO of HireVue. "In a 20-minute interview, we can collect a lot of data. We have no idea who the candidate is, but we can collect a lot of data about vocal variation, personal pronouns" and other information to make an assessment, he added.

Parker said HireVue's AI programs are screened for bias, which helps level the playing field for job applicants. The AI models tend to be most effective for jobs where there's a lot of data to draw from, he added.

In other words, AI isn't designed to find the best CEO — since C-suite executives are relatively uncommon in the workforce — but for rank-and-file workers like salespeople.

Is it accurate?

Companies in the AI space say their programs save time by weeding out low-quality applicants and singling out the best potential hires. But artificial intelligence has its limits — and probably isn't more accurate at predicting job performance than tried-and-true methods, according to Princeton's Narayanan, an associate professor of computer science.

Narayanan points to an academic study that relied on family data to predict future outcomes in a number of areas, such as a child's likely academic performance and future material hardship. But the study's AI analysis failed to outperform old-fashioned linear regression in predicting future struggles or successes, he said.

While AI can prove useful for simpler perception-based tasks, such as speech-to-text translation or identifying plagiarized material, it "can't predict social outcomes," Narayanan concluded.

At the same time, he warned, AI creates risks by transferring massive amounts of personal data to tech firms and placing a greater reliance on applications that lack transparency.

Those concerns are echoed by EPIC, which in November asked the Federal Trade Commission to investigate HireVue. It claims HireVue's algorithms are "opaque" and that its "secret analysis of biometric data thus causes substantial privacy harms to job candidates."

The FTC told CBS MoneyWatch it had received both the letter and the petition, but declined additional comment.

HireVue said it believes the complaint is without merit. "The company upholds the highest levels of rigor and ethics as its employees work every day to increase fairness and objectivity in the hiring process," it told CBS MoneyWatch.

"Disappointed"

HireVue CEO Parker said he was "disappointed" with Narayanan's assessment, and said his company reached out to him to discuss his concerns but was "rebuffed."

Asked whether he rebuffed HireVue's overture, Narayanan forwarded his email exchange with the company to CBS MoneyWatch. Instead of a private discussion, he wrote, he urged HireVue to publish its research, "ideally in a peer-reviewed forum, and discuss it openly." He added in the email, "I'm not sure a private conversation is the best medium. There's a high level of public interest in AI bias since the issue affects everyone."

In a statement, HireVue said it "was surprised that Dr. Narayanan would not welcome the opportunity for a one-on-one discussion on the facts and established science supporting HireVue's approach to assessments and AI before making public claims about efficacy and validity."

The issue of AI in hiring is gaining more attention from lawmakers. In January, a new Illinois law went into effect that requires companies to disclose that artificial intelligence will be used to assess candidates' video interviews. Employers also must explain how the AI tool works, ask for consent from the job candidate, and destroy the video and any copies within 30 days if a job applicant requests it.

Illinois State Rep. Jaime Andrade Jr., who sponsored the bill, told the Chicago Tribune last month: "The technology hasn't been vetted fully yet. There are some concerns regarding bias. [The software] is only as good as the data it is given."

Is it really less biased?

HireVue and other hiring experts say AI has the potential to be less biased than human interviewers. Algorithms can strip out gender, race or any other protected group when analyzing candidates. Yet when algorithms go wrong, they can go very wrong, such as in the case of Amazon's recruitment engine.

In that now-infamous instance, Amazon scrapped the AI application in 2018 after realizing that the program was excluding women applicants from the pool because the model was based on 10 years of data from job applicants, who tended to be men. In other words, the AI "learned" the biased view that men were a better fit than women, reinforcing gender stereotypes in tech.

"Even if it doesn't look at sensitive attributes, like names that can be a proxy for gender, programs can pick up on this unknowingly," said Glen Cathey, senior vice president and head of digital strategy and innovation at Randstad, a recruitment agency. "Being responsible and ethical in AI, not just in recruitment, is the measurement element."

EEOC standard

Companies that rely on AI need to measure and test for bias, and that's something HireVue studies closely, said Lindsey Anderson Zuloaga, director of data science at the company.

"Whatever biases it has, it could affect a lot of people — and that's true of who gets credit, who gets parole," she said, referring to other fields where AI has been used to make determinations. "In any pre-hire assessments, you need to look at how different demographic groups are being scored, and if there are differences that may be evidence of adverse impact."

HireVue screens for results where any group, defined by characteristics such as gender or race, is selected at a rate less than 80% of the rate for the top-scoring group, a yardstick for bias used by the U.S. Equal Employment Opportunity Commission. For example, if equal numbers of men and women applied and an AI program singled out 50 men as solid candidates but only 30 women, the women's selection rate would be just 60% of the men's, falling below that threshold.
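A minimal sketch of that four-fifths check appears below. It assumes each group's selection rate is simply the number advanced divided by the number who applied; the group names and counts are hypothetical, and real bias audits involve more than this single ratio test.

```python
# Minimal sketch of the EEOC "four-fifths" (80%) adverse-impact check.
# Group names and counts are hypothetical; real audits go well beyond this ratio.

def selection_rates(groups: dict[str, tuple[int, int]]) -> dict[str, float]:
    """groups maps group name -> (selected, applicants); returns each group's selection rate."""
    return {name: selected / applicants for name, (selected, applicants) in groups.items()}

def impact_ratios(groups: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Each group's selection rate divided by the highest group's rate."""
    rates = selection_rates(groups)
    top_rate = max(rates.values())
    return {name: rate / top_rate for name, rate in rates.items()}

if __name__ == "__main__":
    # Assume 100 male and 100 female applicants; 50 men and 30 women advance.
    for group, ratio in impact_ratios({"men": (50, 100), "women": (30, 100)}).items():
        flag = "below 80% threshold" if ratio < 0.8 else "passes"
        print(f"{group}: impact ratio {ratio:.2f} ({flag})")
    # women: impact ratio 0.60 (below 80% threshold)
```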

"We're lucky in this space in that our notion of what is fair is defined by law," Zuloaga said. "We build the model, then we train the algorithm, then look at how is it scoring people within the group."

Hacks for AI?

Job applicants are perhaps justifiably nervous about facing an AI in a job interview, which can be nerve-wracking even with other humans. It's no surprise that YouTube tutorials have sprung up to help job candidates learn how to ace pre-recorded interviews by HireVue and others. The tips include how to set up your webcam, how to look good on video and how to keep cool under pressure, which is not so different from the advice a job hunter would get for an interview with a real person.

In the end, there may be no secret sauce for acing an AI interview, partly because the algorithm is a black box. Even so, human judgment is typically involved in the hiring process after AI programs have screened for the best candidates, HireVue's Parker said.

"We're the first step in a process — the next step is an in-person interview," he added. "It's our desire that there is always a human involved."
