What’s Wrong with Job Interviews, and How to Fix Them

Published February 15, 2018


I needed to hire a new salesperson, and one resume stood out like a sore thumb. The applicant, Ari, was a math major and built robots in his spare time—clearly not the right skill set for sales. But my boss thought Ari looked interesting, so I called him in for an interview. Sure enough, he bombed it.

I reported back to my president that although Ari seemed like a nice guy, he hadn’t made eye contact once during the 45-minute interview. It was obvious that he lacked the social skills to build relationships with clients.

I knew I was in trouble when my president started laughing. “Who cares about eye contact? This is a phone sales job.”

We invited Ari back for a second round. Instead of another interview, a colleague recommended a different approach, one that made it clear he would be a star. I hired Ari, and he ended up being the best salesperson on my team. I walked away with a completely new way of evaluating talent. Ever since, I’ve been working with organizations to rethink their selection and hiring processes.

Interviews are terrible predictors of job performance. Consider a rigorous, comprehensive analysis of hundreds of studies of more than 32,000 job applicants over an 85-year period by Frank Schmidt and Jack Hunter. They covered more than 500 different jobs—including salespeople, managers, engineers, teachers, lawyers, accountants, mechanics, reporters, farmers, pharmacists, electricians and musicians—and compared information gathered about applicants to the objective performance that they achieved in the job.

After obtaining basic information about candidates’ abilities, standard interviews only accounted for 8% of the differences in performance and productivity. Think about it this way: imagine that you interviewed 100 candidates, ranked them in order from best to worst, and then measured their actual performance in the job. You’d be lucky if you put more than eight in the right spot.
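To make that 8% figure concrete: explaining 8% of the differences in performance corresponds to a correlation of about 0.28 between interview ratings and later job performance. The short simulation below (my illustration, not from the Schmidt–Hunter study; the variable names and setup are assumptions) generates interview scores with exactly that much signal and confirms how little of the performance variation they capture.

```python
import math
import random

random.seed(42)

# A predictor that explains 8% of the variance in performance
# corresponds to a correlation of sqrt(0.08) ~= 0.28.
r = math.sqrt(0.08)

n = 100_000
performance = [random.gauss(0, 1) for _ in range(n)]
# Interview score = r * true performance + noise, scaled so its variance stays 1.
interview = [r * p + math.sqrt(1 - r**2) * random.gauss(0, 1)
             for p in performance]

# Pearson correlation between interview scores and actual performance.
mean_i = sum(interview) / n
mean_p = sum(performance) / n
cov = sum((i - mean_i) * (p - mean_p)
          for i, p in zip(interview, performance)) / n
sd_i = math.sqrt(sum((i - mean_i) ** 2 for i in interview) / n)
sd_p = math.sqrt(sum((p - mean_p) ** 2 for p in performance) / n)
corr = cov / (sd_i * sd_p)

print(f"correlation: {corr:.3f}, variance explained: {corr**2:.3f}")
```

Even with a correlation of 0.28, most of what separates strong performers from weak ones is invisible to the interviewer, which is why rankings built from interviews alone land so few candidates in the right spot.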

Interviewer biases are one major culprit. When I dismissed Ari, I fell victim to two common traps: confirmation bias and similarity bias.

Confirmation bias is what leads us to see what we expect to see—we look for cues that validate our preconceived notions while discounting or overlooking cues that don’t match our expectations. Since I had already concluded that Ari wasn’t cut out for sales, I zeroed in on his lack of eye contact as a signal that I was right. It didn’t occur to me that eye contact was irrelevant for a phone sales job—and I didn’t notice his talents in building rapport, asking questions and thinking creatively. Once we expect a candidate to be strong or weak, we ask confirming questions and pay attention to confirming answers, which prevents us from gauging the candidate’s actual potential.

Why did I form this expectation in the first place? Similarity bias.

Extensive research shows that interviewers try to hire themselves: we naturally favor candidates with personalities, attitudes, values and backgrounds similar to our own. I was a psychology major with hobbies of springboard diving, performing magic and playing word games, and I had done the sales job the previous year. Ari was a robot-building math major, so he didn’t fit my mental model of a salesperson. He wasn’t Mini-Me.

After writing Blink, Malcolm Gladwell became so concerned about his own biases that he removed himself from the process of interviewing assistants altogether. And even if we take steps to reduce interviewer bias, there’s no guarantee that applicants will share information that accurately forecasts their performance.

One challenge is impression management: candidates want to put their best foot forward, so they tend to give the answers that are socially desirable rather than honest.

Another challenge is self-deception: candidates are notoriously inaccurate about their own capabilities. Consider these data points summarized by psychologist David Dunning and colleagues:

(1) High school seniors: 70% report having “above average” leadership skills, compared with 2% “below average,” and when rating their ability to get along with others, 25% believe they are in the top 1% and 60% put themselves in the top 10%.

(2) College professors: 94% think they do above-average work.

(3) Engineers: in two different companies, 32% and 42% believe their performance is in the top 5% at their company.

(4) Doctors, nurses, and surgeons: for treating thyroid disorders, handling basic life support tasks and performing surgery, there is no correlation between what healthcare professionals say they know and what they actually know.

Overall, Dunning and colleagues estimate that employees’ self-ratings capture only about 8% of the differences in their objective performance. Also, the data show that the most unskilled candidates are the least aware of their own incompetence. The less you know in a given domain, the less qualified you are to judge excellence in that domain. The punch line: candidates are not reliable sources of information about their talents. As Timothy Wilson concludes in Strangers to Ourselves, “people often do not know themselves very well.”

The good news is that interviews can be improved.

Want to learn more? Read the rest of the article on medium.com.

About the Author
Adam Grant

Professor & Author

Wharton School of Business

Adam Grant is the youngest tenured professor at Wharton. Named one of BusinessWeek’s favorite professors and one of the world’s 40 best business professors under 40, he is the best-selling author of Give and Take: A Revolutionary Approach to Success and Originals: How Non-Conformists Move the World. His pioneering research has led to increased performance and reduced burnout among business professionals—concluding that a giving mindset might be the best path to getting ahead.

Years at GLS: 2015