Metrics to Track for AI Recruiting
The five metric categories that matter when measuring AI recruiting success: throughput, quality, fairness, cost, and adoption, plus which numbers to report to each audience.
The metrics that matter when measuring AI recruiting are not new. They are the same metrics good recruiting has always tracked, with a few additions specific to AI. The mistake most teams make is reporting too many of them too often, which dilutes the signal. The discipline is to pick the right small set for each audience and report them consistently.
The five categories
1. Throughput
- Roles per recruiter per quarter (capacity)
- Time-to-fill, by role family (speed)
- Time-to-shortlist, by role family (sourcing speed specifically)
- Pipeline conversion rates at each stage
- Source-of-hire mix (inbound, sourced, agency, referral)
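Pipeline conversion rates are simple stage-to-stage ratios. A minimal sketch, with illustrative stage names and counts (not from any real ATS):

```python
# Sketch: stage-to-stage pipeline conversion rates.
# Stage names and counts are illustrative placeholders.
stages = [
    ("applied", 400),
    ("screened", 120),
    ("interviewed", 40),
    ("offered", 10),
    ("hired", 7),
]

for (name, count), (next_name, next_count) in zip(stages, stages[1:]):
    rate = next_count / count
    print(f"{name} -> {next_name}: {rate:.0%}")
```

Tracking the ratio at each boundary, rather than one end-to-end number, shows which stage is leaking candidates.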
2. Quality
- Offer-acceptance rate, by role family
- First-90-day retention on AI-shortlisted hires
- Hiring-manager NPS or pulse satisfaction with shortlists
- Candidate NPS post-process
3. Fairness
- Applicant pool composition vs shortlist composition by category
- Pass-through rate at each stage by category
- Recruiter override rate, especially for candidates with non-traditional backgrounds
- Demographic distribution drift quarter-over-quarter
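One common way to sanity-check pass-through rates by category is an impact-ratio comparison against the highest-passing group (the "four-fifths rule" heuristic from US adverse-impact analysis). A sketch with made-up group names and counts:

```python
# Sketch: per-category pass-through rates at one stage, compared
# against the highest-passing category. The 0.8 threshold follows the
# four-fifths adverse-impact heuristic; all numbers are illustrative.
applied = {"group_a": 200, "group_b": 150, "group_c": 50}
passed  = {"group_a": 80,  "group_b": 45,  "group_c": 18}

rates = {g: passed[g] / applied[g] for g in applied}
best = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best
    flag = "  <- review" if ratio < 0.8 else ""
    print(f"{group}: pass-through {rate:.0%}, impact ratio {ratio:.2f}{flag}")
```

A flagged ratio is a prompt for review, not proof of bias; small pools in particular produce noisy ratios.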
4. Cost
- Cost-per-hire, by role family
- Tooling spend per recruiter
- Agency leakage (% of roles paid to external agencies)
- AI usage cost (voice minutes, screening calls) against plan
5. Adoption
- % of roles running through AI flow vs manual
- Recruiter override rate (high overrides flag rubric problems)
- Bottom-decile sample audit cadence (should run at least monthly)
- Hiring-manager engagement with AI-generated reports
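The override rate is worth computing per role family, since a rubric can work for one family and fail for another. A sketch with a hypothetical threshold and invented counts:

```python
# Sketch: recruiter override rate per role family as an adoption signal.
# A persistently high rate suggests the AI rubric disagrees with
# recruiters. The 20% threshold and all counts are illustrative.
decisions = {
    "engineering": {"ai_shortlisted": 120, "recruiter_overrides": 12},
    "sales":       {"ai_shortlisted": 80,  "recruiter_overrides": 28},
}

for family, d in decisions.items():
    rate = d["recruiter_overrides"] / d["ai_shortlisted"]
    flag = "  <- review rubric" if rate > 0.20 else ""
    print(f"{family}: override rate {rate:.0%}{flag}")
```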
Five categories, twenty-one metrics. Each audience cares about a subset of them. Reporting all twenty-one to all audiences is the fastest way to make the metrics meaningless.
What to report by audience
Recruiting team daily
Capacity utilisation, sourcing throughput, scheduled-vs-completed screens. The team-level view, used to triage the day.
Recruiting leadership weekly
Time-to-fill trend, override rate trend, pipeline distribution by role family. The operational view, used to spot problems early.
Hiring managers per role family
Open roles, shortlist conversion, offer-acceptance, time-to-fill against SLA. The partnership view, used to keep alignment with the business.
Executive leadership quarterly
Cost-per-hire, agency leakage, total tooling spend, recruiter capacity. The strategic view, used to justify investment and steer priorities.
Compliance / legal
Demographic pipeline distribution, audit-log completeness, override patterns by category, AI disclosure compliance. The risk view, used to satisfy regulators and respond to candidate inquiries.
Common mistakes
- Reporting averages instead of medians; one outlier role distorts the average
- Mixing time-to-fill and time-to-hire definitions across reports
- Reading 14-day data as steady-state; the 90-day number is the budget benchmark
- Ignoring fairness metrics until a complaint arrives
- Tracking volume metrics without quality metrics; speed without retention is not a win
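The first mistake is easy to demonstrate. With illustrative time-to-fill numbers, a single stuck role moves the mean far more than the median:

```python
# Why medians: one outlier role distorts the average time-to-fill.
from statistics import mean, median

# Illustrative days-to-fill; the 180-day entry is one stuck exec role.
time_to_fill_days = [28, 31, 25, 30, 27, 180]

print(f"mean:   {mean(time_to_fill_days):.1f} days")   # pulled up by the outlier
print(f"median: {median(time_to_fill_days):.1f} days") # reflects the typical role
```

Here the mean lands in the fifties while the median stays under thirty, so a report built on averages would suggest a process problem that does not exist.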
What to automate vs report manually
Throughput and cost metrics should be automated; the data is structured and the cadence is high. Quality and fairness metrics often need recruiter or hiring-manager input that does not flow automatically. Adoption metrics sit somewhere in between. Pick the right tooling for each, and do not pretend a single dashboard covers everything.
For reporting specifically, see what reports and analytics AI recruiting tools provide. For the time-to-hire arc, see how to measure time-to-hire with AI.