AI Recruiting Privacy and Data Security
Privacy and data security on AI recruiting platforms involve candidate data, regulatory compliance, and vendor processing. Below: the risks, and the controls that address them.
Privacy and data security on AI recruiting platforms is the procurement question that buyers most often defer until late and most often regret deferring. Candidate data is sensitive: personal identifiers, employment history, and in some cases voice recordings and screening transcripts. Vendor practices vary widely. The teams that handle this well treat it as a first-class evaluation criterion, not a checkbox at the end.
The four categories of risk
1. Candidate data exposure
The most direct risk: the vendor mishandles candidate data, exposing it to unauthorised access. Mitigations are the standard SaaS security posture (SOC 2 Type II, ISO 27001, encryption at rest and in transit), plus role-based access control, SSO, and audit logging on your side. Confirm during evaluation; do not assume.
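As an illustration of what "role-based access control plus audit logging on every privileged action" can look like on the buyer side, here is a minimal Python sketch. The role names, the audit_log sink, and the export function are hypothetical stand-ins for your platform's real SSO roles and logging pipeline, not any vendor's API:

```python
from datetime import datetime, timezone
from functools import wraps

# Hypothetical role model: which roles may perform which privileged actions.
ROLE_PERMISSIONS = {
    "recruiter": {"view_candidate"},
    "admin": {"view_candidate", "export_data", "delete_candidate"},
}

audit_log = []  # stand-in for an exportable, append-only audit sink

def privileged(action):
    """Deny the call unless the user's role grants `action`; log either way."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(user, *args, **kwargs):
            allowed = action in ROLE_PERMISSIONS.get(user["role"], set())
            audit_log.append({
                "ts": datetime.now(timezone.utc).isoformat(),
                "user": user["id"],
                "action": action,
                "allowed": allowed,
            })
            if not allowed:
                raise PermissionError(f"{user['id']} may not {action}")
            return fn(user, *args, **kwargs)
        return wrapper
    return decorator

@privileged("export_data")
def export_candidate_data(user, candidate_id):
    ...  # the vendor's export API would be called here
```

The point of the sketch is the shape: denied attempts get logged too, because an audit trail that only records successes cannot answer an investigator's questions.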
2. Regulatory non-compliance
GDPR for EU candidates, the EU AI Act for AI-driven decisioning in EU hiring, NYC Local Law 144 (AEDT) for candidates applying to roles in New York City, and similar frameworks emerging in California, Illinois, and Colorado. The vendor needs a posture that supports your compliance work; you need processes that operate it. Both are required.
3. Vendor model training on your data
A real and increasingly important risk. Some vendors use customer data to train their shared models by default, which means a competing customer could indirectly benefit from patterns learned on your hiring data. Read the contract carefully and demand, at minimum, an opt-out; better, training only on explicit opt-in.
4. Voice recording and biometric data
Voice screening records candidate audio, which many jurisdictions treat as biometric data subject to stricter handling rules. Confirm three things: how long the recording is retained, where it is processed, and whether it is used for any purpose other than the screening itself.
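A sketch of retention enforcement for these artifacts. The 90-day and 365-day windows are illustrative assumptions, since the actual values are a per-jurisdiction legal decision:

```python
from datetime import datetime, timedelta, timezone

# Illustrative retention windows; the real values are a per-jurisdiction
# legal decision, not a technical default.
RETENTION = {
    "voice_recording": timedelta(days=90),        # stricter: treated as biometric
    "screening_transcript": timedelta(days=365),
}

def expired(record_type, recorded_at, now=None):
    """True if the artifact is past its retention window and due for deletion."""
    now = now or datetime.now(timezone.utc)
    return now - recorded_at > RETENTION[record_type]

# Example: flag artifacts the vendor should already have deleted.
recordings = [
    {"type": "voice_recording", "recorded_at": datetime(2025, 1, 10, tzinfo=timezone.utc)},
    {"type": "screening_transcript", "recorded_at": datetime(2025, 6, 1, tzinfo=timezone.utc)},
]
stale = [r for r in recordings if expired(r["type"], r["recorded_at"])]
```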
Data residency questions
Where is candidate data stored: US, EU, both, regional? Some buyers (especially in regulated industries or in regions with data localisation laws) need EU-only storage and processing. Confirm in writing; do not accept “our infrastructure is global.”
Retention and deletion
- Candidate data retention policy: how long is it kept
- Right-to-be-forgotten flows: how does the vendor handle GDPR delete requests (a tracking sketch follows this list)
- Backup retention: backups should also be subject to deletion
- Audit log retention: enough to support regulatory or candidate appeal review (typically 12 to 24 months)
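A minimal sketch of how a buyer might track an erasure request across every store that holds the data. The target names and the backup-expiry convention are assumptions for illustration, not a vendor API:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ErasureRequest:
    """Tracks a GDPR delete request across every store that holds the data."""
    candidate_id: str
    received_at: datetime
    # Backups usually cannot be edited in place; they age out on a fixed cycle,
    # so "done" for backups means the last backup containing the data expired.
    targets: dict = field(default_factory=lambda: {
        "primary_store": False,
        "derived_scores": False,    # inferred attributes must be deleted too
        "backups_expired": False,
    })

    def complete(self):
        return all(self.targets.values())

req = ErasureRequest("cand-123", datetime.now(timezone.utc))
req.targets["primary_store"] = True
req.targets["derived_scores"] = True
print(req.complete())  # False until the backup cycle has rolled over
```

The design choice worth noting: a request is not "complete" when the primary store is wiped. Derived scores and backups are where most deletion programs quietly fail.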
Access controls
- SSO and SCIM included in base plan, not as a premium upcharge
- Role-based access with role definitions you can configure
- Audit log on every privileged action, exportable
- Customer-controlled IP allowlisting available for high-security buyers
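For the allowlisting item, a sketch of the check itself, using Python's standard ipaddress module. The network ranges are illustrative (RFC 5737 documentation ranges), and a real deployment would enforce this at the vendor's edge, not in application code:

```python
import ipaddress

# Customer-controlled allowlist; these are RFC 5737 documentation ranges.
ALLOWED_NETWORKS = [
    ipaddress.ip_network("203.0.113.0/24"),   # e.g. office egress
    ipaddress.ip_network("198.51.100.0/24"),  # e.g. VPN egress
]

def ip_allowed(client_ip):
    """True if the request originates from an allowlisted network."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in ALLOWED_NETWORKS)

assert ip_allowed("203.0.113.17")
assert not ip_allowed("192.0.2.1")
```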
Privacy is a procurement decision, not a deployment afterthought. The clauses that matter most (data residency, model training, retention, deletion) all need to be in the contract before signing.
What good vendors do
- SOC 2 Type II and ISO 27001 reports available on request
- Data residency options including EU-only
- Opt-out from model training as a default, not as a paid premium
- Clear retention policies in writing, with delete capabilities exposed in the UI
- Per-jurisdiction compliance documentation (GDPR, EU AI Act, NYC AEDT)
- Subprocessor list maintained and customer-notified on changes
What to flag during evaluation
- “We use your data to improve our models” without an opt-out
- Data residency is “US-based” with no EU option, when you have EU candidates
- Retention is “at our discretion” or “up to 7 years” without justification
- SSO, audit log, or SCIM are paid upgrades rather than baseline
- The vendor cannot produce a SOC 2 Type II report on request
The 2026 regulatory direction
Disclosure to candidates that AI is part of the screening process is becoming standard and, in a growing number of jurisdictions, mandatory. Right-to-explanation (the candidate can ask why they were ranked or rejected as they were) is moving in the same direction. Build disclosure and explanation into your candidate communications now; it is much easier than retrofitting under regulatory pressure.
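One way to build explanation readiness now is to store, per decision, the facts an explanation request would need. A sketch with hypothetical field names; this is not a regulatory schema, just the minimum an answer would draw on:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class ScreeningExplanation:
    """Per-decision record so a 'why was I ranked this way?' request can be
    answered from stored facts rather than reconstructed after the fact."""
    candidate_id: str
    decision: str           # e.g. "advanced", "rejected", "ranked #12"
    ai_involved: bool       # disclosed to the candidate up front
    factors: list           # human-readable inputs that drove the outcome
    model_version: str      # which model version produced the score
    decided_at: datetime
    human_reviewer: str     # empty string if no human reviewed
```

Retained for the audit-log window discussed above (typically 12 to 24 months), the same record supports regulatory review and candidate appeals.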
For the bias mechanics, see does AI recruiting software reduce hiring bias. For the security checklist, see AI recruiting software security requirements.
Quick answers
- What candidate data does AI recruiting software actually collect?
- Resumes, profile attributes, interview transcripts (if voice/video), email and call logs, and behavioral signals (response time, click-through), plus inferred attributes the AI generates from these inputs (see the inventory sketch after these answers).
- Are candidates told their data is processed by AI?
- GDPR, the EU AI Act, and several US state laws now require it. Disclose it in your privacy notice and at the point of application. Vendors should provide template language.
- Who owns inferred AI scores about a candidate?
- The customer typically owns scores, but vendors often retain rights to use them for model improvement. Negotiate this explicitly and get clarity on deletion of derived data.
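Pulling the first and third answers together, a minimal data-inventory sketch. Category names and retention values are hypothetical, but the shape (collected vs. AI-inferred, each with an explicit retention period) is what a records-of-processing entry needs:

```python
# Maps each data category to its source and an illustrative retention period.
# Values are assumptions for illustration, not a standard or legal advice.
DATA_INVENTORY = {
    "resume":               {"source": "collected",   "retention_days": 730},
    "profile_attributes":   {"source": "collected",   "retention_days": 730},
    "interview_transcript": {"source": "collected",   "retention_days": 365},
    "email_and_call_logs":  {"source": "collected",   "retention_days": 365},
    "behavioral_signals":   {"source": "collected",   "retention_days": 180},
    "inferred_attributes":  {"source": "ai_inferred", "retention_days": 180},
}

# Inferred attributes are data about the candidate too: they need their own
# retention entry and must be covered by delete requests (see "Retention and
# deletion" above).
```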