
EU AI Act Compliance for Recruiters: What You Need to Know

The EU AI Act is in force and most HR teams aren’t ready. Here’s what high-risk means for CV screening tools and the 3 questions to ask every vendor.

Vitae Team
7 min read

The EU AI Act came into force in August 2024, and based on what we're seeing in HR forums right now, most teams are not prepared.

That goes especially for AI tools used in hiring, which includes CV screening, ranking, and parsing.

This article breaks down what the EU AI Act means for recruiters and staffing agencies, what “high-risk” actually requires in practice, and the questions you should be asking every vendor in your HR tech stack.

What the EU AI Act classifies as high-risk

The EU AI Act creates a risk-based classification system for AI tools. At the top of that system sits the high-risk category, and employment is one of the explicitly listed domains.

Automated systems that make — or significantly influence — decisions about employment, promotion, or worker management are classified as high-risk. CV screening tools, ATS ranking systems, and interview analysis tools all fall in this zone.

The key phrase is significantly influence. You don’t need to fully automate a hiring decision to trigger the classification. If an AI system produces a ranking, a score, or a recommendation that a recruiter relies on to advance or reject candidates, that system is likely high-risk under the Act.

What “high-risk” actually requires

Being classified as high-risk isn’t a ban. It’s a set of obligations. Here’s what it means in practical terms:

  • Transparency — candidates must be informed if AI was used to evaluate them. This isn’t optional, and “it’s in the privacy policy” probably isn’t sufficient. The notification needs to be clear and specific.
  • Human oversight — a human must be able to review and override automated decisions. If your tool produces a score or ranking and that output goes directly to a client without human review, you have an oversight gap.
  • Accuracy and fairness requirements — you need to be able to audit your system’s outputs. Can you explain why candidate A ranked higher than candidate B? If not, you have a potential compliance issue.
  • Data governance — training data must be documented and bias-tested. If your vendor can’t tell you what data their model was trained on or how they test for bias, that’s a red flag.

What this means practically

If you’re using a black-box AI screening tool bought off the shelf, one you can’t explain or audit, you are potentially non-compliant. The enforcement risk increases as the Act’s obligations phase in through 2025 and 2026.

This isn’t theoretical. The EU AI Act applies to any organisation deploying AI systems within the EU, regardless of where the vendor is headquartered. If you’re a Belgian recruitment agency using a US-built AI screening tool, you’re still responsible for compliance. The obligation falls on the deployer, not just the provider.

For most recruitment agencies, this means reviewing your current tech stack and asking hard questions about every tool that touches candidate evaluation. Not next year. Now.

The three questions every vendor should answer

If you’re evaluating your HR tech stack for EU AI Act compliance, there are three questions worth asking each vendor:

  1. Is your system classified as high-risk under the EU AI Act? If the vendor can’t answer this clearly, that’s already a problem. Any tool involved in candidate screening, ranking, or recommendation is likely in scope.
  2. How do you ensure human oversight of automated outputs? Look for specifics here. “Humans are in the loop” is vague. You want to know: at what point can a human review and override the system’s decision? Is there an audit trail?
  3. Can you provide documentation on how your model makes decisions? This covers both transparency to candidates and your own ability to audit the system. If the answer involves the words “proprietary” or “trade secret”, that’s a compliance risk, not a feature.

Where vitae.build fits

We built Vitae in Belgium, and we designed it from the ground up to operate transparently within the EU regulatory environment. Here’s how we think about compliance:

We structure and parse CV data — we don’t make hiring decisions. Vitae extracts information from candidate CVs and formats it into standardised, branded documents. The output is a human-readable, auditable PDF. There is no hidden scoring, no candidate ranking, no recommendation engine deciding who gets an interview.

The recruiter stays in the loop. Always. Every CV that comes out of Vitae is reviewed by a human before it goes to a client. The tool produces the document; the recruiter makes the decision. This is by design, not by accident.

Our outputs are fully auditable. When Vitae parses a CV, you can see exactly what data was extracted and how it maps to the final document. There’s no black box. If a candidate asks what information was used, you can show them.

This doesn’t make Vitae the answer to every EU AI Act compliance question. But it does mean the tool you use for CV formatting shouldn’t be adding compliance risk to your stack.

What you should do now

The EU AI Act timeline is progressing. Waiting for “clearer guidance” is a strategy, but not a good one. Here’s a practical starting point:

  1. Audit your current HR tech stack. List every tool that touches candidate data in any evaluative way. This includes ATS systems, screening tools, interview platforms, and CV parsers.
  2. Ask the three questions above to each vendor. Document the responses. If a vendor can’t or won’t answer, note that too — it’s useful information.
  3. Review your candidate-facing disclosures. Are you informing candidates when AI is used in their evaluation? The transparency requirement is one of the most straightforward obligations to get right.
  4. Check your human oversight processes. Where in your workflow does a human review automated outputs before they affect a candidate? If the answer is “nowhere” or “at the end”, consider adding review checkpoints.
  5. Start documenting. Even if enforcement is still ramping up, having documentation of your compliance efforts shows good faith and prepares you for when audits do happen.
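If you want to keep the vendor responses from steps 2 and 5 in a structured, auditable form rather than scattered emails, a minimal sketch might look like the following. The vendor and tool names, and the example answer, are hypothetical; the three questions are the ones from this article.

```python
from dataclasses import dataclass, field
from datetime import date

# The three vendor questions from this article.
QUESTIONS = (
    "Is your system classified as high-risk under the EU AI Act?",
    "How do you ensure human oversight of automated outputs?",
    "Can you provide documentation on how your model makes decisions?",
)

@dataclass
class VendorAudit:
    """One record per tool in the HR tech stack."""
    vendor: str
    tool: str
    asked_on: date
    # Maps each question to the vendor's written response, if any.
    answers: dict = field(default_factory=dict)

    def unanswered(self):
        """Questions the vendor hasn't answered yet; document these too."""
        return [q for q in QUESTIONS if not self.answers.get(q)]

# Hypothetical example: one vendor has answered only the oversight question.
audit = VendorAudit("Acme ATS", "CV Ranker", date(2025, 3, 1))
audit.answers[QUESTIONS[1]] = "A recruiter reviews every ranked shortlist before submission."
print(len(audit.unanswered()))  # prints 2: two questions are still open
```

Even a simple record like this, kept per tool and dated, is the kind of good-faith documentation step 5 is about.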

The bigger picture

The EU AI Act isn’t going to make recruitment harder. If anything, it pushes the industry toward practices that should have been standard all along: transparency about what tools you use, oversight of automated decisions, and accountability for outcomes.

For agencies that already value the human element in recruitment — that see technology as a way to handle the administrative work so recruiters can focus on relationships and judgment — this is actually good news. The regulation favours tools that augment human decision-making over tools that replace it.

That’s the philosophy we’ve built Vitae around. The technology handles the formatting, the structuring, the consistency. The recruiter handles the thinking.

Try Vitae for free

Automate your CV formatting workflow. Turn messy candidate resumes into branded, client-ready profiles in seconds.