Salesforce Staffing
Industry Insights


    Stop Letting Hiring Managers Run Behavioral Interviews.

    Written By:  Josh Matthews

    A Candidate Did Everything Right. They Still Got It Wrong.

    A candidate went through multiple rounds of interviews with one of our clients. In the second round, he demonstrated solid knowledge but struggled to articulate technical concepts around some of the other Salesforce Nonprofit Cloud modules. There were legitimate technical concerns. That’s fine. That’s exactly what technical interviews are for. You ask hard questions, you see where someone lands.

    The client saw enough potential to advance him to a third round. They wanted to give him another shot. Good. That’s how it should work.

    Then it all fell apart.

    In the third round, the hiring manager asked him to share an example of difficult feedback he had received. Standard behavioral question. The candidate gave an honest, specific answer: his recruiter had told him that he tends to speak too much in interviews, and that this feedback was challenging to hear.

    The hiring manager flagged this as a “red flag.” The interpretation? A “potential lack of receptiveness to constructive feedback.”

    I got this feedback secondhand through an HR contact at the client company. I was not in the interview room. But when I heard how the answer was interpreted, I knew exactly what had gone wrong.

    And it wasn’t the candidate.

    The Pizza Analogy

    Imagine you walk into a pizzeria. You see the master pizza maker behind the counter, tossing dough in the air with decades of skill. Beautiful technique. The crust is going to be perfect.

    Then he walks away and leaves a six-year-old in charge of the oven.

    The pizza comes out raw. Or burnt. Nobody’s happy.

    That’s what companies do when they hand behavioral assessment to untrained hiring managers. The dough got tossed beautifully. Your recruiters sourced the talent, your job postings attracted the right people, your screening process narrowed the field. Then it all got destroyed in the oven.

    You can have the best sourcing, the best employer brand, the best ATS in the world. None of it matters if the person making the final behavioral call doesn’t know what they’re looking at.

    The Numbers Are Embarrassing

    This isn’t opinion. The data is brutal.

    Research cited by HireVue found that hiring managers get their candidate evaluations correct as few as 20% of the time. One in five. You’d get better odds flipping a coin. If your sales team’s forecasts were right only 20% of the time, you’d fire every single one of them.

    Google analyzed tens of thousands of interviews and found “zero relationship” between interviewer scores and on-the-job performance. Laszlo Bock, Google’s former SVP of People Operations, called traditional interviews “a complete random mess.”

    Schmidt and Hunter’s landmark 1998 meta-analysis looked at 85 years of research data. Unstructured interviews explain only 14% of the variance in job performance; structured interviews explain 26%. That’s nearly double. And Schmidt and Zimmerman’s 2004 follow-up found it takes three to four untrained interviewers to match the predictive validity of a single structured interview conducted by a trained assessor.
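    Where do 14% and 26% come from? Schmidt and Hunter report validity coefficients of roughly r = .38 for unstructured interviews and r = .51 for structured ones; squaring a correlation gives the share of performance variance a method explains. A quick check of that arithmetic:

```python
# Validity coefficients from Schmidt & Hunter (1998):
# roughly r = .38 (unstructured) and r = .51 (structured).
# Squaring r converts a correlation into the proportion of
# variance in job performance the method explains.
methods = {"unstructured interview": 0.38, "structured interview": 0.51}
for name, r in methods.items():
    print(f"{name}: r = {r:.2f}, variance explained = {r**2:.0%}")
# unstructured interview: r = 0.38, variance explained = 14%
# structured interview: r = 0.51, variance explained = 26%
```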

    And here’s the punchline: within 18 months, over 50% of new hires fail. Half. Of. All. Hires.

    This is not a marginal problem. This is a systemic failure. And it starts in the interview room.

    Your Brain Is Lying to You

    Why should hiring managers leave behavioral interviews to trained professionals?

    Because the human brain is terrible at objective evaluation. And unless you’ve been trained to recognize your own biases, you’re going to act on them without even knowing it.

    Research from the University of Toledo, led by Dr. Frank Bernieri and cited by Laszlo Bock, found that interviewers form judgments about candidates in the first 10 seconds of an interview. Then they spend the remaining 99.4% of the conversation in confirmation bias mode, looking for evidence to support the snap judgment they already made.

    Ten seconds. Before the candidate has finished saying hello, the decision is already forming.

    Here’s the lineup of biases working against you:

    Confirmation bias: seeking evidence that supports your initial impression while ignoring contradictory data.

    Attribution bias: assuming a candidate’s behavior reflects their character rather than the situation they’re in (more on this below).

    Similarity bias: favoring candidates who look, talk, or think like you.

    Anchoring bias: letting the first piece of information you receive disproportionately influence everything that follows.

    According to LinkedIn’s Global Recruiting Trends Report, 58% of recruiters acknowledged being prone to biases in their evaluations. These are people who do this for a living every day. If trained recruiters admit it, imagine what’s happening with hiring managers who conduct interviews a few times a quarter.

    They Asked for Difficult Feedback. Then Punished Him for Giving It.

    What is the Fundamental Attribution Error in hiring?

    The Fundamental Attribution Error is the tendency to attribute someone’s behavior to their character rather than to the situation they’re in. It’s one of the most well-documented cognitive biases in psychology. And research published in PLOS ONE found that experienced professionals consistently make this error when evaluating others, even when they should know better.

    This is exactly what happened to our candidate. Let me break down two devastating points.

    Point 1: They penalized him for answering their own question.

    They asked him to share an example of difficult feedback he had received. He gave them an honest, specific example. He said his recruiter told him he speaks too much in interviews, and that this feedback was challenging to hear.

    Then they flagged it as a red flag for “potential lack of receptiveness to constructive feedback.”

    Read that again. They asked for difficult feedback. He gave them difficult feedback. Then they penalized him because it was difficult to hear. That is literally what makes feedback difficult. That is what they asked for. If he had shrugged and said “oh, it was no big deal,” that would have been a weak, dishonest answer. He was honest. He was specific. He was vulnerable. They punished him for it.

    Point 2: The proof of coachability was sitting right in front of them. They never looked for it.

    He told them he had been coached to give shorter answers in interviews. That is a gift-wrapped opportunity for any trained behavioral assessor. The obvious follow-up question is: “Did he? Were his responses in this interview concise? Was he rambling, or had he actually incorporated the feedback?”

    They never asked. They never evaluated the one data point that would have proven or disproven coachability in real time. The evidence was in the room. A trained behavioral assessor would have caught it immediately. They missed it entirely.

    When I raised concerns about the interpretation with the HR contact, the response was telling. “We don’t record or transcribe interviews.” And: “Interview questions and the coaching of managers are internal processes managed by executive leadership.”

    Translation: there’s no recording to review, no transcript to evaluate, and questioning the process is off-limits.

    That’s not a hiring process. That’s a black box.

    My take: recording and transcription are the baseline for evaluation and improvement, particularly when you’re asking people to do something outside their core function. If you’re trusting managers to make behavioral assessments, you need to verify those assessments are sound. Without recordings, without transcripts, without review, you’re flying blind and calling it strategy.

    You’re Not Just Misreading Candidates. You’re Looking for People Who Don’t Exist.

    This wasn’t just a behavioral interpretation failure. The technical evaluation had its own problems.

    The candidate demonstrated solid knowledge of Salesforce Nonprofit Cloud fundraising. That’s the most mature and widely adopted module in the NPC suite. But he was dinged for not demonstrating deep expertise in Grantmaking, Program Management, and Volunteers.

    Here’s the problem with that expectation. According to current market data on the NPC ecosystem:

    Program Management and Outcomes: only 10-20% of NPC professionals have expert-level depth. The module is growing fast, but multi-implementation specialists are still rare.

    Grantmaking: 5-10%. The product is roughly two years old. Deep experience is concentrated in a small group of implementation partners.

    Volunteers: 5-10%. Volunteer Management only arrived in NPC in the last couple of releases. Expert-level experience is still emerging.

    The Nonprofit Cloud itself only launched in 2023. The NPC certification is brand new. Expecting a candidate to have deep expertise across all of these modules is not a high standard. It’s a fantasy. You’re fishing in a puddle and blaming the fish.

    This is not unique to this client. It’s an ecosystem-wide pattern.

    The 10K Advisors 2025 Salesforce Talent Ecosystem Report found that global talent supply grew 27% year over year, but the growth is heavily concentrated in generalist admin and business analyst roles. Specialist roles remain severely undersupplied. Technical architects make up just 1% of global supply, with demand up 27% but supply growing only 4%.

    Meanwhile, job descriptions keep asking for unicorns. One analysis noted that many Salesforce job postings ask for an “Admin + Developer + Architect hybrid,” and that top-tier candidates view this as a red flag.

    And 87% of Salesforce professionals report that the job market has become more challenging, with postings now requiring “a close match on multiple certifications, hands-on product experience, and often industry-specific knowledge” just to make it past screening.

    Here’s what’s actually happening: companies are posting for the top 5% of a niche specialty, rejecting the top 20% who could do the job with minimal ramp, and then wondering why they can’t fill the role. They found someone with real NPC experience dating back to the product’s launch, solid fundraising knowledge, and enough promise that they advanced him to a third round. Then they disqualified him over modules that barely have an expert community yet, and misread his behavioral signals on top of it.

    If you’re a CEO or VP reading this: your team may be doing this right now. Not out of malice. Out of a lack of calibration between what they’re asking for and what actually exists in the talent market.

    What Should Actually Happen

    Behavioral assessment belongs in the hands of trained professionals. I’m talking about I/O psychologists, certified behavioral assessors, or professional recruiters with formal behavioral assessment training. These are people who understand the difference between signal and noise, between a data point and a projection.

    Hiring managers should evaluate technical competence, team fit, and job-specific knowledge. That’s their lane. They know their product, their stack, their team dynamics. That expertise is irreplaceable. But behavioral interpretation is not their lane. It requires a completely different skill set, and getting it wrong has consequences.

    Here’s what a real behavioral assessment process looks like:

    Structured questions designed around validated competency models

    Consistent evaluation rubrics applied to every candidate

    Trained assessors who understand attribution, anchoring, and confirmation bias

    Separation of behavioral assessment from technical evaluation

    Calibration sessions to ensure inter-rater reliability

    Recording and transcription as baseline for review and improvement

    None of this is radical. This is the standard in organizational psychology. The problem is that most companies have never adopted it.
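    To make “inter-rater reliability” concrete: a calibration session typically compares how two assessors scored the same candidates on the same rubric, corrected for the agreement you’d get by chance. Here’s a minimal sketch using Cohen’s kappa; the assessor names and scores are hypothetical, purely for illustration:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of candidates both raters scored identically.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: probability both raters would pick the same score
    # at random, based on each rater's own score frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[s] * freq_b[s] for s in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical 1-4 rubric scores from two assessors on eight candidates.
assessor_1 = [3, 2, 4, 3, 1, 2, 3, 4]
assessor_2 = [3, 2, 3, 3, 1, 2, 4, 4]
print(f"kappa = {cohens_kappa(assessor_1, assessor_2):.2f}")  # kappa = 0.65
```

    By the commonly used Landis and Koch benchmarks, values above roughly 0.6 read as substantial agreement; a calibration session exists to get (and keep) a team above that bar.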

    And on the unicorn problem: calibrate your requirements against what actually exists in the market. Talk to your recruiting partners. Understand which skills are abundant and which are emerging. Build your job descriptions around what a strong, trainable candidate looks like, not a fictional composite of every skill you wish you could find in one person.

    The Cost of Getting This Wrong

    How much does a bad hire actually cost a company?

    If the human argument doesn’t move you, let’s talk money.

    The U.S. Department of Labor estimates the cost of a bad hire at up to 30% of the employee’s first-year salary. SHRM puts that figure between 50% and 200%. CareerBuilder’s 2024 data shows that 75% of employers admit to making bad hires, with the average mistake costing $17,000 per position and executive-level bad hires exceeding $240,000.
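    To put those percentages in dollars, here’s a back-of-the-envelope calculation. The $100,000 salary is a hypothetical figure for illustration; the percentage ranges are the DOL and SHRM estimates cited above:

```python
# Hypothetical first-year salary; percentage ranges from the
# DOL and SHRM estimates cited above.
salary = 100_000
dol_cost = salary * 0.30                           # DOL: up to 30%
shrm_low, shrm_high = salary * 0.5, salary * 2.0   # SHRM: 50%-200%
print(f"DOL estimate:  up to ${dol_cost:,.0f}")            # up to $30,000
print(f"SHRM estimate: ${shrm_low:,.0f}-${shrm_high:,.0f}")  # $50,000-$200,000
```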

    And here’s the kicker: organizations without a standard interview process are five times more likely to make bad hires. Five times. That is the cost of winging it. That is the cost of trusting gut instinct over structured methodology. That is the cost of leaving behavioral assessment to someone who does it a few times a quarter with zero training.

    But here’s the part nobody talks about: the cost of the good hire you didn’t make. The candidate who would have been trainable, dedicated, and high-performing, but got rejected because an untrained interviewer misread the behavioral signals. You don’t see that cost on a spreadsheet. But it shows up in the team that stays understaffed for another three months. In the projects that slip. In the top performer who burns out covering the gap. That cost is invisible, but it’s real.

    You wouldn’t let an untrained employee run your financial audit. You wouldn’t let a random team member write your legal contracts. So why are you letting untrained managers make the most consequential talent decisions in your organization?

    Stop Guessing. Start Getting This Right.

    Remember the pizza analogy? You’ve got master pizza makers building beautiful dough. Your sourcing is solid. Your employer brand is strong. Your pipeline is full.

    Then you hand the oven to someone who’s never been trained to use it.

    The pizza comes out wrong. The candidate gets rejected. Or worse, the wrong candidate gets hired. Either way, everyone loses.

    Stop putting hiring managers in a position to fail. Give them the technical evaluation where they excel, and bring in trained professionals for behavioral assessment. It’s not about taking power away from managers. It’s about setting them up to succeed in their actual area of expertise.

    And stop posting for unicorns. Define what a strong, trainable candidate actually looks like. Calibrate your requirements against the real talent market, not a wish list. The companies that figure this out will build better teams faster. The ones that don’t will keep losing good people to bad process.

    The data is clear. The cost is real. The solution is available. The only question is whether you’re going to keep doing it the old way and hoping for different results.

    If you want to talk about how to fix your hiring process, reach out at TheSalesforceRecruiter.com. And if you want more on hiring strategy, Salesforce careers, and building high-performing teams, check out The Hiring Edge podcast.

    Sources & References

    1. HireVue / industry research: Hiring managers make correct candidate evaluations as few as 20% of the time.
    2. Bock, L. (2015). Work Rules! Google’s analysis of tens of thousands of interviews found “zero relationship” between interviewer scores and on-the-job performance. Laszlo Bock, former SVP of People Operations, Google.
    3. Schmidt, F. L., & Hunter, J. E. (1998). The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 85 years of research findings. Psychological Bulletin, 124(2), 262–274. (Unstructured interviews explain 14% of the variance in job performance; structured interviews explain 26%.)
    4. Schmidt, F. L., & Zimmerman, R. D. (2004). A counterintuitive hypothesis about employment interview validity and some supporting evidence. Journal of Applied Psychology, 89(3), 553–561. (Three to four untrained interviewers needed to match one trained structured assessor.)
    5. Bernieri, F. J., & Petty, K. N. (2011). The influence of handshakes on first impression accuracy. Social Influence, 6(2), 78–87. University of Toledo research on snap judgments within 10 seconds of an interview, cited by Laszlo Bock.
    6. LinkedIn Talent Solutions. Global Recruiting Trends Report. (58% of recruiters acknowledge being prone to unconscious bias in evaluations.)
    7. PLOS ONE: Research on experienced professionals consistently making the Fundamental Attribution Error when evaluating others.
    8. U.S. Department of Labor: Cost of a bad hire estimated at up to 30% of the employee’s first-year salary.
    9. Society for Human Resource Management (SHRM): Cost of a bad hire estimated between 50% and 200% of annual salary.
    10. CareerBuilder (2024): 75% of employers admit to making bad hires; average cost per mistake is $17,000; executive-level bad hires exceed $240,000.
    11. CareerBuilder (2024): Organizations without a standard interview process are five times more likely to make bad hires.
    12. 10K Advisors. (2025). Salesforce Talent Ecosystem Report. Global Salesforce talent supply grew 27% YoY; technical architects represent 1% of supply with demand up 27% but supply growing only 4%.