EU AI Act for HR: which tools become ‘high-risk’ and what inbound employers must do

The EU AI Act is the world's first comprehensive AI law, and it directly reshapes how employers screen, assess, and promote staff across Europe. Recruitment tools face phased obligations through 2026 and 2027, so inbound employers hiring in the Netherlands must act early. In practice, this AI regulation affects every AI-powered ATS, performance tracker, or promotion model you deploy. Below, you will learn what counts as high-risk, which timelines apply, and which steps protect your organisation. Understanding the link between AI and law has never mattered more for HR leaders.

What is the EU AI Act?

The EU AI Act (Regulation (EU) 2024/1689) sets uniform rules for AI systems placed on the EU market. It ranks tools by risk level and imposes duties on both providers and deployers, which means this AI law applies directly to HR teams using recruitment or performance software. It entered into force on 1 August 2024.

The Act splits systems into four categories: unacceptable, high-risk, limited, and minimal risk, each carrying different duties. Because the rules are directly applicable EU law, breaches risk fines of up to €35 million or 7% of global turnover. Firms should therefore map every AI tool currently in use; early mapping reveals exposure well before enforcement ramps up.

Why HR and recruitment sit in the high-risk category in the EU AI Act

HR systems decide who gets hired, promoted, or dismissed, which is why Annex III of the EU AI Act lists employment-related AI as high-risk. The category covers screening, task allocation, evaluation, and promotion decisions. As a result, most recruitment and performance tools land inside scope, regardless of what a vendor claims on its marketing page.

Additionally, the link between AI and law tightens when automated decisions affect pay, dismissal, or contract renewal. Providers must run conformity assessments and register systems in the EU database. Meanwhile, deployers such as employers carry separate duties on oversight, logging, and transparency. Both sides share responsibility under this AI regulation, and liability can shift when deployers override built-in safeguards.

Which HR tools count as high-risk AI systems?

Tools that evaluate, rank, or filter people in employment contexts qualify as high-risk under the EU AI Act. This covers CV parsers, video interview scorers, productivity monitors, and promotion algorithms. However, simple scheduling or payroll calculators usually stay out of scope. Always verify vendor documentation before deployment.

Tool type | Typical use | Risk tier under the AI Act
CV screening and ATS ranking | Filters applicants | High-risk
Video interview analysis | Scores candidates | High-risk
Performance analytics | Tracks output | High-risk
Promotion decision engines | Ranks staff | High-risk
Shift scheduling | Plans rosters | Limited or minimal
Payroll automation | Calculates salary | Limited or minimal

In practice, any tool that meaningfully influences hiring or firing likely counts as high-risk.

The phased timeline: what employers face through 2026-27

The EU AI Act rolls out in clear stages. First, bans on unacceptable practices applied from 2 February 2025. Next, general-purpose AI rules started on 2 August 2025. Most high-risk HR obligations then apply from 2 August 2026, and the full rules arrive on 2 August 2027. Planning should therefore start immediately.

Meanwhile, deployers must train staff on AI literacy, a duty already in force since 2 February 2025. Moreover, transparency rules for emotion recognition and biometric tools take effect alongside the high-risk regime. Early preparation avoids last-minute supplier swaps, so budget cycles should include compliance costs now, and supplier contracts signed in 2025 should already reflect future duties under this AI regulation.

What must inbound employers do now?

Inbound employers should first inventory every AI system touching HR data. Next, classify each tool by risk tier. After that, request EU conformity evidence from vendors. Finally, set up human oversight, logging, and a clear complaints route for affected staff. These four steps cover the core deployer duties under the European AI Act.

Practical actions include:

  • Map your AI stack: list every ATS, assessment, or monitoring tool.
  • Demand vendor documentation: CE marking, technical files, and instructions for use.
  • Train HR staff: AI literacy is now a legal duty under the AI Act.
  • Enable human review: no hire-or-fire decision may run fully automated.
  • Update privacy notices: candidates must know AI assists the process.
  • Log outcomes and keep records ready for auditors or regulators.
  • Brief works councils early: Dutch law also gives them consultation rights on HR monitoring.

Above all, document everything. Regulators will ask for proof, not promises.

How Octagon supports compliant hiring

Octagon Professionals’ recruitment and EOR services in the Netherlands combine local compliance expertise with transparent, human-led hiring. Because we support employers expanding into the EU, our process already aligns with the direction of this AI law. Therefore, inbound clients gain a clear route to compliant hiring without rebuilding their entire tech stack. Reach out to discuss your current AI tools and your next steps.

Frequently asked questions

Does the EU AI Act apply to non-EU employers?

Yes. The AI law applies to any employer placing AI outputs on the EU market or using AI to affect EU-based workers. Therefore, a US or UK company hiring in the Netherlands clearly falls in scope. The location of the vendor or head office does not remove this duty.

When do high-risk HR rules actually take effect?

Most high-risk obligations under the European AI Act start on 2 August 2026. However, bans on prohibited practices and AI literacy duties already apply today. Full rules, including those for embedded AI in regulated products, arrive on 2 August 2027. Early planning prevents painful compliance gaps.

Can we keep using our current AI-based ATS?

Possibly, but only if the vendor proves conformity. Request the CE marking, technical documentation, and human-oversight guidance. If the provider cannot supply these, your firm carries the risk. Switch suppliers or add manual steps before the 2026 deadline to stay compliant.

What are the fines for breaching the AI Act?

Breaches of this AI regulation can reach €35 million or 7% of global annual turnover, whichever is higher. Lower tiers apply to transparency and documentation failures. National authorities, such as the Dutch AP, will enforce the rules. Compliance costs usually stay far below your potential penalty exposure.
