
EU AI Act EOR Liability: Who's Responsible in 2026?

Compliance
This article is for informational purposes only and does not constitute legal, tax, or compliance advice. Always consult a qualified professional before acting on any information provided.

Who's liable under the EU AI Act if I hire through an EOR in 2026?

Last updated: 22nd April 2026

The EU AI Act is now in force, and if you're employing people in the EU through an Employer of Record, you're probably wondering who carries the compliance burden. Your EOR is the legal employer on paper. But you're the one deciding whether to use AI tools in hiring, performance reviews, and workforce decisions. So when the regulator comes knocking, whose door do they knock on?

The honest answer is that liability doesn't sit neatly with one party. Under Regulation (EU) 2024/1689, obligations attach to specific roles, and in an EOR arrangement, those roles can be split across multiple organisations. The company that deploys an AI system for employment decisions is the deployer under the Act, regardless of who signs the employment contract. That distinction matters more than most EOR contracts currently acknowledge.

Below, we walk through how liability actually splits between you and your EOR, where your contract is likely leaving gaps, and what to do differently when you're operating across several EU countries with layered local rules.

What actually changes in an EOR setup

The Act treats AI used for recruitment, selection, promotion, termination, and performance monitoring as high-risk.

In practice, that means much of the HR tooling you already use sits in the heavily regulated tier, not the lighter transparency-only tier. And this is not a niche concern: 79% of European firms now use algorithmic management tools to instruct, monitor, or evaluate employees.

A deployer is the party using an AI system under its own authority to make employment decisions. In a typical EOR arrangement, that's the client company choosing who to hire or promote. The EOR, running payroll and managing local employment compliance, usually isn't.

Fines at the top tier run to the higher of €35 million or 7% of worldwide annual turnover, with lower fixed caps and percentages for other breach categories. Either way, it's your whole group's revenue, not just the EU subsidiary's, that sits on the line.

The Act applies to AI systems used on EU-based workers even when the tool is bought and managed from the UK or the US. That's the pattern we see most often with companies hiring through an EOR.

On pricing, our EOR fee is $599 per employee per month. Salary and statutory employer costs show up as separate lines on your invoice rather than being blended into a percentage of payroll. The same honesty we want on your invoices is the honesty we expect in the AI schedule of your contract.

Indemnities between you and your EOR can help you recover money after something goes wrong. They don't change who the regulator writes to first.

Who is the "deployer" under the EU AI Act?

The deployer is the natural or legal person that uses an AI system under its authority. In employment contexts, that's the party making or materially influencing hiring, promotion, termination, or performance decisions using AI outputs. The key word is authority. Whoever authorises the AI system's use and acts on its recommendations is the deployer.

Here's where EOR arrangements get complicated. Your EOR is the legal employer of your EU-based workers. They run payroll, handle statutory benefits, and manage local employment compliance. But they're not typically the party deciding whether to use an AI screening tool to filter candidates or an AI performance system to flag underperformers.

You are. When you procure an AI recruitment tool, configure its parameters, and use its outputs to decide who gets hired or promoted, you're the deployer under the EU AI Act. The fact that the EOR signs the employment contract doesn't change that. The Act follows the decision-making authority, not the employment paperwork.

This creates a practical split that most EOR contracts don't address. The EOR handles employment law compliance. You handle AI deployment compliance. And if your contract doesn't explicitly allocate these responsibilities, both parties may assume the other is covering it.

Where does EOR liability start and stop?

The EOR's liability centres on their role as legal employer. They're responsible for compliant employment contracts, correct payroll calculations, statutory benefits administration, and local labour law adherence. If German works council consultation requirements apply, the EOR handles that. If French termination procedures require specific documentation, the EOR manages it.

But the EU AI Act creates a separate compliance layer that sits alongside employment law. When you use AI for employment-related decisions affecting EU workers, you trigger deployer obligations regardless of the EOR relationship. These include implementing appropriate human oversight, ensuring the AI system is used according to the provider's instructions, and maintaining logs of the system's operation.

The EOR doesn't automatically inherit these obligations just because they're the legal employer. The Act assigns duties based on who controls and uses the AI system, not who employs the worker. If you're the one deciding to deploy an AI tool and acting on its outputs, you're the one with deployer obligations.

That said, the lines can blur. If your EOR provides an AI-enabled HR platform as part of their service and uses it to make employment decisions on your behalf, they may become the deployer for those specific functions. The analysis depends on who has authority over the AI system's use in each workflow.

What about tools the EOR provides?

This is where mid-market companies often get caught out. Many EOR platforms now include AI-powered features for candidate screening, onboarding automation, or performance analytics. When your EOR provides these tools and uses them in their capacity as legal employer, they may be the deployer for those specific functions.

But here's the catch. If you're the one configuring the tool, reviewing its outputs, and making final decisions based on AI recommendations, you may still be the deployer even though the EOR owns the platform. The EU AI Act looks at who exercises authority over the AI system's use, not who holds the software licence.

Consider a scenario where your EOR provides an AI screening tool that ranks candidates. If the EOR runs the tool and presents you with a shortlist, they're likely the deployer. But if you access the tool directly, set the screening criteria, and use the rankings to make hiring decisions, you're likely the deployer despite using the EOR's platform.

This distinction matters because deployer obligations include implementing human oversight measures, monitoring the system's operation, and suspending use if you identify risks. You can't delegate these obligations to your EOR through contract alone if you're the party actually deploying the system.

What should the EOR contract say?

Your EOR contract needs an AI responsibilities schedule that explicitly allocates EU AI Act duties between you and your EOR. Most current EOR agreements were drafted before the Act came into force and don't address AI compliance at all. That's a gap you need to close.

The schedule should specify which party is responsible for each AI-enabled HR workflow. For any AI system the EOR provides, the contract should clarify whether the EOR or the client is the deployer based on who has authority over the system's use. For AI systems you procure independently, the contract should confirm you retain deployer obligations.

Three provisions matter most. Documentation requirements should specify who maintains records of AI system use and makes them available for regulatory inspection. Human oversight allocation should clarify who implements and monitors the required human review of AI outputs. And notification procedures should establish how each party informs the other of AI-related compliance concerns or incidents.

The contract should also address what happens when the EOR's local employment obligations intersect with your AI deployment obligations. In Germany, for example, works council consultation may be required before implementing AI monitoring tools. Your contract should clarify whether the EOR handles this consultation as part of their employment compliance role or whether you need to coordinate directly.

Teamed's approach is to include AI responsibilities as a standard schedule in every EOR master service agreement, with country-specific addenda that address local labour law requirements affecting AI deployment. Based on Teamed's work with mid-market companies across multiple EU jurisdictions, this structure prevents the assumption gaps that leave both parties exposed.

What if I use an EOR across multiple jurisdictions?

Multi-jurisdiction EOR arrangements multiply the complexity. The EU AI Act provides a baseline, but individual member states can layer additional requirements on top. German works council rights, French employee consultation obligations, and Dutch data protection interpretations all affect how you can deploy AI in employment contexts.

The challenge is that the same AI-enabled hiring process may trigger different local requirements in each country where you have EOR-employed workers. Your German team may require works council consultation before you implement an AI screening tool. Your French team may need individual employee notification. Your Dutch team may face stricter limits on automated decision-making under local GDPR interpretation.

This is the "single workflow, many countries" problem. You need a governance model that standardises your AI-enabled hiring approach while accommodating country-specific constraints. That means documenting your AI deployment policies at the group level, then creating country-specific implementation guides that address local requirements.

Your EOR can help with the local employment law layer, but they can't manage your AI governance for you. Teamed's analysis of multi-country EOR programmes shows that companies operating in 5-15 EU countries typically need a dedicated AI compliance workstream that coordinates with their EOR's local employment expertise rather than delegating entirely to the EOR.

The practical solution is a GEMO approach (Global Employment Management and Operations), where a single provider manages your employment infrastructure across all jurisdictions while you maintain centralised AI governance. This prevents the fragmentation that occurs when different EOR providers in different countries give conflicting advice about AI compliance.

Can the EOR indemnify me?

Your EOR can contractually agree to indemnify you for losses arising from AI compliance failures. But an indemnity only shifts financial risk between the parties. It doesn't change who the regulator pursues for the underlying breach.

If you're the deployer under the EU AI Act and you fail to implement required human oversight, the regulator can pursue you directly. Your indemnity claim against the EOR is a separate commercial matter that doesn't affect your regulatory exposure. The regulator doesn't care about your contractual allocation of risk.

This is a crucial distinction that many mid-market companies miss. Contractual protections are valuable for managing commercial risk between you and your EOR. But they're not a substitute for actual compliance. You can't outsource your deployer obligations through contract any more than you can outsource your GDPR controller obligations.

What you can do is structure your EOR relationship so that responsibilities are clearly allocated and both parties have the operational capacity to meet their obligations. If your EOR is the deployer for certain AI-enabled functions they provide, they should have the compliance infrastructure to meet deployer obligations. If you're the deployer for AI systems you procure, you need your own compliance programme.

The indemnity becomes relevant when something goes wrong despite both parties' compliance efforts, or when one party's failure causes loss to the other. It's a backstop, not a compliance strategy.

What's the practical next step?

Start by mapping every AI system that touches your EU-employed workforce. Include recruitment tools, performance management systems, workforce analytics platforms, and any automated decision-making in HR processes. For each system, identify who has authority over its use, who configures it, who reviews its outputs, and who makes final decisions based on those outputs.
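
If it helps to make that mapping concrete, the inventory can be as simple as one structured record per system. This is a sketch of our own devising, not a format the Act prescribes, and the field names are suggestions to validate with counsel.

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    name: str              # e.g. the screening or performance tool
    hr_function: str       # recruitment, promotion, termination, monitoring
    authority_holder: str  # who authorises the system's use: "client" or "eor"
    configures: str        # who sets the parameters and screening criteria
    decides: str           # who acts on the system's outputs
    likely_deployer: str   # working conclusion, to be confirmed with counsel

inventory = [
    AISystemRecord(
        name="CV screening tool",
        hr_function="recruitment",
        authority_holder="client",
        configures="client",
        decides="client",
        likely_deployer="client",
    ),
]
```

A register like this gives you a single artefact to review against your EOR contract: any row where authority, configuration, and decision-making don't point to the same party is a row worth discussing before a regulator asks.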

Then review your EOR contracts. Do they address AI compliance at all? Do they allocate deployer obligations between you and the EOR? Do they include country-specific provisions for jurisdictions with additional local requirements? If the answer to any of these is no, you have a gap to close.

For companies using EOR across multiple EU countries, consider whether your current provider structure supports coherent AI governance. Fragmented EOR relationships with different providers in different countries make it harder to implement consistent AI policies and maintain the documentation that regulators expect.

Teamed's GEMO framework provides one relationship across all employment models and jurisdictions, which simplifies AI governance by giving you a single point of coordination for the employment law layer while you maintain control of your AI deployment decisions. The graduation model means that as your needs evolve, whether from EOR to entity or from basic compliance to sophisticated AI governance, you don't need to rebuild your provider relationships.

If you'd like a second pair of eyes on that picture, Talk to an Expert. A named specialist will walk through your current EOR setup with you, show you where liability sits across each EU country, and help you decide what to tighten in your contracts and policies next. The honest answer, always.
