The Science of Leading

AI in Hiring: Balancing Efficiency and the Human Touch

How can organizations leverage AI in recruitment without trading away empathy and fairness? This episode dives into the research, risks, and real-world strategies for building a hiring process that is both high-tech and deeply human.

Chapter 1

Algorithmic Hiring: Opportunities and Risks

Claire Monroe

Welcome back to The Science of Leading, everyone. I’m Claire Monroe, and Edwin Carrington is here with me, as always. Today, we’re diving into something that’s... kind of everywhere right now—AI in hiring. And Edwin, I keep seeing two totally different takes on this. Some people are like, “Finally! A way to remove bias,” and others are just, like, deeply anxious that algorithms are making things worse. You’ve seen both sides play out, right?

Edwin Carrington

Absolutely. And there’s new research that captures that exact tension. One study—a multidisciplinary survey—really lays it out. There’s hope, real hope, that algorithmic hiring could reduce bias. The idea is, machines don’t bring gut instinct or unconscious prejudice into the process. In theory, it’s more objective.

Claire Monroe

Right... so, like, no more “this candidate just feels right” based on nothing? We’ve talked about that kind of bias before—like, hiring based on vibes.

Edwin Carrington

That’s the theory, yes. But it’s not the whole story. The same research shows a serious risk: if the data used to train these tools is biased—which, let’s be honest, it often is—then the algorithm doesn’t erase bias. It learns it. And then... it scales it. You’re not solving discrimination. You’re automating it.

Claire Monroe

Oof. Yeah, it’s like—bad data in, bad decisions out... but faster and everywhere. Why do you think we keep hearing these two—kind of competing—narratives? Like, “AI will save hiring” versus “AI is just a bias machine”?

Edwin Carrington

It’s human nature to look for simple answers. We want to know: is AI good or bad? But this is one of those “it depends” situations. AI isn’t inherently fair—or unfair. It depends entirely on how it’s trained, deployed, and monitored. And that’s where hiring leaders really need to focus.

Claire Monroe

So, okay—given all the promise and the pitfalls, how do you think about it? Like, where do you land on this?

Edwin Carrington

I see massive potential—when it’s done right. AI can make hiring more efficient, more consistent, and in some cases, less biased. But we need clear guardrails. We can’t just assume the tech knows what “fair” looks like. If we don’t stay actively involved—checking, adjusting, asking questions—we risk doing real harm.

Claire Monroe

Yeah... it’s like we’re walking a tightrope. Promise on one side, danger on the other. And that sets us up perfectly for this next point, which I love—because it’s where the real nuance lives. Like, how do you actually blend AI’s strengths with the human side that makes hiring... you know, actually work?

Chapter 2

Striking the Human–Tech Balance in Recruitment

Edwin Carrington

That’s the key question. Most of the smart guidance out there—from Forbes HR Council to SocialTalent to the practitioners in the trenches—says the same thing: let AI handle the repetitive, time-consuming stuff. But keep the meaningful moments human.

Claire Monroe

Right, like—okay—scheduling interviews, screening CVs, maybe answering FAQs through a chatbot. That stuff can eat up a recruiter’s entire day. But if you automate it, you actually free up time for the parts that matter. Real convos. Real connection.

Edwin Carrington

Exactly. And here’s what’s important: research shows that what candidates want most is feedback, transparency, and personalization. That doesn’t come from a bot. Automation should create the space for those high-impact moments—not eliminate them.

Claire Monroe

Ugh—okay, story time. Early in my career, I worked somewhere that tried to automate, like, everything. No calls, no updates, just templated emails. It was efficient... but also kinda brutal. People started saying stuff like, “I felt invisible.” It was rough. So we added short check-in calls—just 5 minutes!—and the change in candidate feedback was unreal.

Edwin Carrington

That’s a perfect example. And it’s exactly what companies like Accenture and BCG are building into their models. They call it High Touch, High Tech. The tech helps with speed—but the touch is where the trust gets built. You map out the process so that every human moment delivers maximum impact.

Claire Monroe

And it’s not just touchy-feely stuff, either. That kind of attention drives better outcomes. Better referrals. Better offer acceptance. Like we said in our last episode—automation isn’t the enemy. But if you’re not intentional about what you automate, you’re gonna miss the mark.

Edwin Carrington

Exactly. If empathy gets lost in the name of efficiency, the long-term costs are real. But with the right design? You get both. Speed and experience.

Claire Monroe

Okay, but now comes the hard part—if you do scale up the tech, how do you keep things fair? And clear? Like... how do you stop it from turning into this mysterious black box?

Chapter 3

Ensuring Fairness, Empathy, and Trust in AI-Driven Hiring

Edwin Carrington

This is where most organizations get tripped up. It’s not enough to just say, “We use AI, but trust us.” Fairness, empathy, and transparency need to be built in—and checked for—constantly.

Claire Monroe

Okay, so like, what does that actually look like? What should teams be focusing on, day to day?

Edwin Carrington

Three big things: First, bias auditing. You have to test your tools to make sure they’re not replicating old patterns. The EU AI Act is really clear about this—it’s not optional. Second, transparency. Candidates should know when AI is being used, and how. If a hiring manager can’t explain it? That’s a red flag. And third—explainability.
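
As a rough illustration of the kind of bias audit Edwin is describing, one common check is the “four-fifths rule”: compare each group’s selection rate against the best-treated group’s rate and flag ratios below 0.8. The sketch below is a minimal, hypothetical example—the column names, sample data, and threshold are assumptions for illustration, not the episode’s (or any vendor’s) actual tooling.

```python
# Minimal sketch of an adverse-impact ("four-fifths rule") check.
# Column names ("group", "selected"), the sample data, and the 0.8 threshold
# are illustrative assumptions, not taken from the episode or a specific tool.
import pandas as pd

def adverse_impact_ratios(df: pd.DataFrame,
                          group_col: str = "group",
                          selected_col: str = "selected") -> pd.Series:
    """Each group's selection rate divided by the highest group's rate."""
    rates = df.groupby(group_col)[selected_col].mean()  # per-group selection rate
    return rates / rates.max()                          # ratio vs. best-treated group

# Hypothetical screening outcomes (1 = advanced to interview, 0 = rejected)
candidates = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "selected": [1,   1,   1,   0,   1,   0,   0,   0],
})

ratios = adverse_impact_ratios(candidates)
flagged = ratios[ratios < 0.8]  # groups falling below the four-fifths guideline
print(ratios)
print("Needs review:", list(flagged.index))
```

A check like this is only a starting point—real audits also look at outcomes further down the funnel and at the features the model relies on—but it shows what “testing your tools” can mean in practice.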

Claire Monroe

That one’s huge. Because even if you’re not doing anything shady, it still feels shady when no one can explain what’s happening. And the privacy side too—like, if people feel like their data’s just... floating around? That’s a trust killer.

Edwin Carrington

Absolutely. Strong organizations—Accenture, BCG, others—they don’t leave that stuff to chance. They do bias checks. They communicate clearly. And they make sure there’s always a human involved when it counts most.

Claire Monroe

Yeah, but I feel like the temptation is real, though. Like, “Hey, this tool’s fast—let’s just roll with it.” But if no one knows how it works... you lose trust. Fast.

Edwin Carrington

Exactly. When the process becomes a black box, people pull away—candidates, hiring teams, regulators. I’ve seen it. The organizations that lead are the ones who stay transparent, who keep humans in the loop, and who treat experience as a strategic asset—not just a checkbox.

Claire Monroe

Yeah, because let’s be real—you can have the flashiest tech stack in the world, but if people don’t trust the process? It’s game over. Same old problems, just in a shinier container.

Edwin Carrington

Exactly right. AI should amplify what people do best—not erase it. The companies that get this right don’t just use AI. They design with intention. They lead with integrity.

Claire Monroe

Okay, that feels like a solid place to land. If you’re wondering how to actually put some of this into action—like, practically—OAD has free tools you can try. Behavioral assessments, bias-reducing strategies... all of it. Just head to o-a-d dot a-i. It’s simple, and it really helps you get clearer on fit.

Edwin Carrington

It’s been great as always, Claire. And thank you to everyone listening. Just by being curious about this stuff—you’re already moving toward better decisions.

Claire Monroe

We’ll be back soon with more on the science—and the humanity—behind smarter hiring. See you next time on The Science of Leading.

Edwin Carrington

Take care, Claire. Take care, everyone.