How AI Cut Time-to-Hire by 64% for a Tech Recruiting Firm
A case study on the AI sourcing and screening system we built for a tech recruiting firm that doubled placements and cut time-to-hire by 64%.
A 22-person tech recruiting firm came to us last fall with a simple problem and an ugly spreadsheet. They were placing 31 candidates a quarter, their average time-to-hire was 47 days, and their recruiters were each reviewing 600 to 800 resumes a week. Within 10 weeks, we shipped an AI sourcing and screening system that cut time-to-hire by 64%, more than doubled placements per recruiter, and surfaced 3x more qualified candidates from the same pipeline. Here is how we scoped it, what we built, and the results 90 days in.
The problem: recruiters drowning in resumes, missing the best candidates
The firm specialized in placing senior backend and platform engineers at Series B and C startups. Their recruiters were sharp. Their client roster was strong. Their pipeline was the bottleneck.
The math the COO walked us through on the first call:
- 47 days average time-to-hire. The industry benchmark for senior engineering roles is 38 to 42 days. They were losing offers to faster competitors.
- 8% resume-to-shortlist rate. Recruiters were spending 60% of their week reading resumes that never made it past first-pass review.
- 22% shortlist-to-offer rate. Strong on paper. But shortlists were being built from the top of the inbox, not the top of the pipeline.
- 41% of placements came from referrals, meaning 59% came from cold sourcing that was barely instrumented.
The COO had been quoted $180K a year for a competing AI sourcing platform. He wanted to know if a custom system would actually move the metrics that mattered, or if it would be another tool that lived next to the four they already paid for.
How we scoped the project
Before writing any code, we ran our standard AI project scoping process. Three checks had to pass.
First, the workflow had to be high volume and pattern-driven. Senior backend roles share a recognizable shape: language stack, system design experience, scale signals, ownership history. That is exactly the kind of pattern matching AI is good at.
Second, the firm had to have a clear definition of "qualified." Without a written rubric, AI screening produces inconsistent results and recruiters lose trust in week two. Their head of recruiting had a 6-page internal doc grading candidates on six dimensions. Good enough.
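To make that concrete, here is roughly what a rubric looks like once it is encoded for a screening system. This is a minimal sketch; the dimension names, weights, and criteria are illustrative placeholders, not the firm's actual rubric:

```python
from dataclasses import dataclass

@dataclass
class RubricDimension:
    name: str
    weight: float       # relative importance; weights sum to 1.0
    pass_criteria: str  # the written standard the model scores against

# Illustrative placeholder, not the firm's actual rubric.
SENIOR_BACKEND_RUBRIC = [
    RubricDimension("language_stack", 0.20, "Production experience in the required stack"),
    RubricDimension("system_design", 0.20, "Designed a distributed system end to end"),
    RubricDimension("scale_signals", 0.20, "Operated services at meaningful traffic or data volume"),
    RubricDimension("ownership", 0.15, "Owned a service or domain, not just tickets"),
    RubricDimension("seniority", 0.15, "Led projects or mentored other engineers"),
    RubricDimension("trajectory", 0.10, "Growing scope across recent roles"),
]

assert abs(sum(d.weight for d in SENIOR_BACKEND_RUBRIC) - 1.0) < 1e-9
```

The format matters less than the fact that it is written down. Every screening stage below scores against a standard like this, not a vibe.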
Third, outputs had to land in tools recruiters already used. They lived in Greenhouse for ATS, Slack for team comms, and Gmail for candidate outreach. Anything that required a fifth tab would die on the vine.
What we built
The system runs across four stages, each with a recruiter-in-the-loop checkpoint. Nothing auto-rejects a candidate. Nothing auto-sends an email.
Stage 1: Sourcing expansion
Whenever a recruiter opens a new role in Greenhouse, the system parses the job description and generates a structured search profile: required stack, scale indicators, seniority signals, location filters, and disqualifiers. It then runs that profile across LinkedIn, GitHub, Stack Overflow, and three smaller engineering communities the firm had been ignoring.
The output is a ranked list of 200 to 400 prospects per role, deduplicated against the firm's 80,000-candidate ATS history. Every prospect comes with a one-paragraph fit summary and links to the signals that drove the score.
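In sketch form, the profile and the dedup step look something like this. The field names mirror the description above; the `llm_extract` callable and the email-only matching are simplifying assumptions, not the production implementation:

```python
import json
from dataclasses import dataclass

@dataclass
class SearchProfile:
    required_stack: list[str]
    scale_indicators: list[str]
    seniority_signals: list[str]
    location_filters: list[str]
    disqualifiers: list[str]  # hard filters, applied before any scoring

def build_search_profile(jd_text: str, llm_extract) -> SearchProfile:
    """Parse a job description into a structured search profile.

    llm_extract stands in for whatever completion call you use; the
    prompt and strict-JSON contract here are illustrative.
    """
    raw = llm_extract(
        "Extract a candidate search profile from this job description. "
        "Return strict JSON with keys: required_stack, scale_indicators, "
        "seniority_signals, location_filters, disqualifiers.\n\n" + jd_text
    )
    return SearchProfile(**json.loads(raw))

def dedupe_prospects(prospects: list[dict], ats_emails: set[str]) -> list[dict]:
    """Drop prospects already in the ATS history.

    Email-only matching keeps the sketch short; real matching would
    also fuzzy-match on name plus employer.
    """
    return [p for p in prospects if p.get("email", "").lower() not in ats_emails]
```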
This stage alone 3x'd the size of the qualified top of funnel while cutting sourcing time per role from 11 hours to under 90 minutes.
Stage 2: Inbound resume screening
Inbound resumes from job boards and referrals flow through a screening pass that scores each candidate against the role rubric. Every candidate gets a structured scorecard with a green, yellow, or red verdict on each of the six dimensions, plus a one-line "why this passed or failed" note recruiters can sanity-check in seconds.
We used a two-model architecture here. A faster, cheaper model did first-pass scoring. A stronger reasoning model only ran on candidates that scored above a threshold or had ambiguous signals. That cut inference cost by about 58% versus running every resume through the top-tier model, with no measurable hit on the firm's blind eval set.
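Here is the routing logic in sketch form, with generic `cheap_model` and `strong_model` callables standing in for the actual model calls, and an illustrative threshold rather than the tuned production value:

```python
from dataclasses import dataclass

@dataclass
class Scorecard:
    verdicts: dict[str, str]  # rubric dimension -> "green" | "yellow" | "red"
    overall: float            # weighted 0-1 score against the rubric
    ambiguous: bool           # conflicting or missing signals
    rationale: str            # the one-line "why this passed or failed"

# Illustrative cutoff; the production threshold was tuned on the blind eval set.
ESCALATION_THRESHOLD = 0.55

def screen_resume(resume_text: str, cheap_model, strong_model) -> Scorecard:
    """Two-pass screening: cheap model on everything, strong model only
    where better judgment can actually change the outcome."""
    first_pass = cheap_model(resume_text)

    # Clear fails with no ambiguous signals stop here. Candidates above
    # the threshold, or with ambiguous signals, get the reasoning model.
    if first_pass.overall < ESCALATION_THRESHOLD and not first_pass.ambiguous:
        return first_pass
    return strong_model(resume_text)
```

Since most inbound resumes fail the first pass cleanly, the expensive model only ever sees the candidates where a sharper judgment is worth paying for. That is where the cost savings came from.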
Stage 3: Outreach drafting
For prospects that cleared the bar, the system drafts a personalized first-touch email and InMail that reference actual specifics from the candidate's profile. Open-source projects they contribute to. Talks they have given. The scale of systems they have owned. Recruiters review and either send, edit and send, or skip with a reason.
The "skip with a reason" feedback feeds the learning loop in stage 4.
The numbers on outreach changed fast. Reply rate on cold outreach went from 11% to 27% in the first six weeks. The system was not writing better English than the recruiters. It was just writing personalized English at a volume the recruiters could not match.
Stage 4: Learning loop
Every recruiter override becomes training data. When a recruiter shortlists a candidate the system marked yellow, or rejects one it scored green, that decision feeds a weekly review. Every Monday the head of recruiting reviews the deltas and decides whether the rubric needs an update or the scoring needs tuning.
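In sketch form, the Monday review starts from a report like this. Field names are illustrative; the two override buckets mirror the cases described above:

```python
from collections import Counter

def weekly_override_report(decisions: list[dict]) -> Counter:
    """Count recruiter overrides by disagreement type for the Monday review.

    Each decision dict is assumed to carry the system's verdict
    ("green" / "yellow" / "red") and the recruiter's action
    ("shortlisted" / "rejected").
    """
    overrides = Counter()
    for d in decisions:
        if d["verdict"] == "yellow" and d["recruiter_action"] == "shortlisted":
            overrides["yellow_shortlisted"] += 1  # system may be too strict
        elif d["verdict"] == "green" and d["recruiter_action"] == "rejected":
            overrides["green_rejected"] += 1      # system may be too lenient
    return overrides
```

A recurring pattern in either bucket is the cue to update the rubric or retune the scoring, deliberately and in one place, rather than silently retraining on every click.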
After 10 weeks of live use, the rubric had been updated 14 times based on patterns the system surfaced. The AI made the rubric sharper. The sharper rubric made the AI better. Compounding loop.
The results after 90 days
We measured against the 90-day baseline collected before launch. The numbers:
- Average time-to-hire: 47 days → 17 days. A 64% reduction.
- Placements per recruiter per quarter: 1.4 → 3.2. More than 2x.
- Qualified candidates per role: 3.1x increase. Driven mostly by the sourcing expansion in stage 1.
- Recruiter hours on resume review: down 71%. Redeployed to candidate calls, client management, and offer negotiation.
- Cold outreach reply rate: 11% to 27%. Pipeline diversity improved alongside volume.
- Revenue per recruiter: up 118% year over year, on a roughly flat headcount.
The COO's quote at the 90-day review: "We were going to hire four more recruiters this year. Instead we hired one and shipped this. The math is not even close."
What made this work, and what usually kills projects like this
We have seen recruiting AI projects fail more often than they succeed. Four things made this one different.
We started with the rubric, not the model. The AI is only as good as the standard it scores against. Firms that skip the rubric work get inconsistent output, then blame the model.
We kept recruiters in the loop on every decision. The system never auto-rejected a candidate or auto-sent a message. It drafted. Recruiters decided. That is the only architecture that survives in a function where reputation lives and dies on candidate experience.
We integrated where the work already lived. Greenhouse, Slack, Gmail. No new dashboard. Adoption was 100% in week two because nobody had to learn a new tool.
We measured cycle time and placements, not "AI accuracy." The business cared about time-to-hire and revenue per recruiter. Those were the metrics we optimized. Accuracy on a holdout set was an internal check, not the win condition.
Takeaways for any recruiting or talent leader considering this
A few things to steal, whether you work with us or not.
- Audit your recruiter time first. If more than half of a recruiter's week is reading resumes and writing first-touch outreach, AI screening will almost certainly pay back inside a quarter.
- Write the rubric before you buy the tool. A clear 6-dimension rubric is worth more than any vendor demo.
- Score the top of funnel, do not just filter it. The win is not rejecting bad candidates faster. It is surfacing the great ones you would have missed at 4 PM on a Friday.
- Budget for integration, not just the model. In our experience the AI is 30% of the build. The other 70% is getting outputs into Greenhouse, Slack, and Gmail in a way recruiters actually use.
If you run a recruiting firm or a talent function where smart people are spending half their week on resume triage and copy-paste outreach, the automation economics in 2026 are no longer a close call. The tooling is real. The workflows are mapped. The ROI shows up inside one quarter.
We have now built systems like this for recruiting, sales development, customer success, and operations teams across more than 1,000 projects. If you want to see what it would look like for your team, get started here and we will map it on a 30-minute call.