
This article is for informational purposes only and does not constitute legal, tax, or compliance advice. Always consult a qualified professional before acting on any information provided.

Is Emotion Recognition at Work Legal in the EU in 2026?

Last updated: 20th April 2026

Emotion recognition in the workplace is prohibited under Article 5(1)(f) of the EU AI Act (Regulation (EU) 2024/1689). The ban applies from 2nd February 2025 and covers every employer operating in the EU, regardless of where the employer is headquartered. Fines reach €35 million or 7% of global annual turnover, whichever is higher. If your HR stack infers employee emotions from biometric data, through facial expression scoring, voice stress detection, or physiological monitoring, it is almost certainly illegal.

This isn't a future compliance concern. It's already in force. And the enforcement machinery is warming up across France, Germany, Ireland, and other member states where market surveillance authorities have publicly signalled their priorities for 2026, with France's CNIL specifically targeting recruitment as a 2026 priority.

For mid-market companies employing people across multiple EU countries through EOR arrangements or owned entities, the question isn't whether this regulation applies to you. With 67% of employers already using at least one monitoring tool, it's whether your HR technology vendors have quietly embedded prohibited features that now create direct liability for your organisation.

What You Need to Know: EU AI Act Emotion Recognition Ban

Regulators can fine you up to €35 million or 7% of worldwide annual turnover for Article 5 breaches. They pick the bigger number. That's Article 99 of Regulation (EU) 2024/1689.

The EU AI Act entered into force on 1st August 2024, with Article 5 prohibitions active from 2nd February 2025.

Here's what's banned: any system that reads employee emotions from their face, voice, or body. Think facial expression analysis, voice stress detection, or physiological monitoring that claims to know how someone feels.

Got employees in the EU? The law applies to you. Doesn't matter if your HQ is in the US or UK. What matters is where your people work.

Employees can report you to two different regulators: the data protection authority and the AI Act market surveillance authority. That's two investigations, two processes, and twice the headache.

Vendor compliance claims are not a defence. The employer is the deployer under the AI Act and carries primary compliance responsibility.

Safety systems that detect tiredness or pain get a pass. They're measuring physical states, not emotions.

What Does the AI Act Ban?

The EU AI Act prohibits eight categories of AI practice under Article 5. Workplace emotion recognition is one of them. The ban covers systems that infer emotions from biometric data when deployed at work or in education. Facial expressions, voice patterns, and physiological signals all fall within scope when used to draw conclusions about an employee's emotional state or intentions.

In This Series

This article is the parent pillar in a series covering EU AI Act compliance for employers. The related guides address specific operational questions that mid-market HR leaders are asking right now.

How Do I Audit HR Vendors for EU AI Act Compliance in 2026? gives you the exact questions to ask vendors, contract terms you need, and features to disable immediately.

Who's Liable When HR Vendors Sell Banned Emotion AI in 2026? examines the liability chain when a vendor markets banned features and what employers should do about it.

Does the EU AI Act Apply to UK Employers in 2026? answers the cross-jurisdiction question every UK HR leader is asking.

Who's Liable Under the EU AI Act If I Use an EOR in 2026? explains how liability splits when your EU workforce is employed through a global employment platform.

What Counts as Emotion AI?

An AI system falls under the ban if it infers emotions or intentions from biometric data. The definition is broader than many HR technology buyers initially assume. Covered systems include facial expression scoring in video interviews, voice stress analysis in call centres, AI mood tracking in engagement apps, and sentiment inference from webcam feeds during meetings.

The key distinction is biometric input. Systems detecting physical states like tiredness or pain for safety reasons are exempt. A fatigue detection system in logistics that monitors driver alertness to prevent accidents operates outside the ban. But a system that claims to detect whether a call centre agent is frustrated, stressed, or disengaged crosses the line.

Text-only sentiment analysis occupies a different regulatory category. Analysing the words employees type in surveys or chat messages doesn't trigger the specific Article 5(1)(f) prohibition because it doesn't process biometric data. That said, text sentiment analysis still carries GDPR risk and employment law exposure. The distinction matters for compliance prioritisation, but it doesn't mean text analysis is risk-free.
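
To make that scoping logic concrete, here is a minimal triage sketch in Python. The field names and categories are our own illustrative assumptions, not terms defined in the Act, and the output is a prompt for legal review, not a verdict.

```python
from dataclasses import dataclass

@dataclass
class HRSystem:
    # Illustrative fields: biometric input (face, voice, physiological signals),
    # emotion-style outputs, a safety/medical purpose, and EU-based workers.
    name: str
    uses_biometric_input: bool
    infers_emotion_or_intent: bool
    safety_or_medical_purpose: bool
    used_on_eu_workers: bool

def triage(system: HRSystem) -> str:
    """First-pass triage mirroring the distinctions above. Not legal advice."""
    if not system.used_on_eu_workers:
        return "outside EU AI Act territorial scope (other laws may still apply)"
    if system.uses_biometric_input and system.infers_emotion_or_intent:
        if system.safety_or_medical_purpose:
            return "possibly exempt (physical-state safety carve-out) - verify"
        return "likely prohibited under Article 5(1)(f) - disable and escalate"
    if system.infers_emotion_or_intent:
        return "outside Article 5(1)(f) (no biometric input) - GDPR risk remains"
    return "not in scope of the emotion recognition ban"

print(triage(HRSystem("video interview scorer", True, True, False, True)))
```

A text-only survey sentiment tool would come back as outside Article 5(1)(f) but flagged for GDPR review, matching the distinction drawn above.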

Which HR Tools Are Affected?

AI video interview platforms using facial expression scoring represent the highest-risk category. Several prominent vendors marketed these features heavily between 2019 and 2024, claiming to detect candidate engagement, honesty, or cultural fit from micro-expressions. Those features are now prohibited when applied to EU-based candidates or employees.

Call centre voice analytics that claims to detect agent emotion falls squarely within the prohibition; aimed at customers, the same features are regulated as high-risk rather than banned. Employee engagement tools inferring mood from biometric input create exposure. Productivity monitoring platforms with "emotional state" or "stress" flags need immediate review. Any tool that processes employee biometric data to produce an emotional judgement output is in scope.

HR leaders on Reddit frequently describe discovering these features buried in admin settings they never configured. The vendor enabled them by default, or a previous administrator turned them on during a trial period. The feature sits there collecting data, creating liability, and nobody in the current team knows it exists.

What Are the Penalties?

The maximum fine is €35 million or 7% of global annual turnover, whichever is higher. This applies specifically to breaches of the Article 5 prohibited practices. For context, GDPR's maximum penalty is €20 million or 4% of global turnover. The EU AI Act deliberately set higher stakes for the practices it considers most harmful.
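
The "whichever is higher" rule means the ceiling scales with company size rather than stopping at €35 million. A two-line illustration in Python:

```python
def max_article_5_fine(global_annual_turnover_eur: float) -> float:
    # Article 99: EUR 35 million or 7% of worldwide annual turnover, whichever is higher.
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

# A company with EUR 1 billion turnover faces a ceiling of EUR 70 million, not EUR 35 million.
assert max_article_5_fine(1_000_000_000) == 70_000_000
```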

Penalties are enforced by national market surveillance authorities in each member state. France's CNIL, Germany's BfDI, and Ireland's DPC have publicly signalled enforcement priorities for 2026. The AI Office within the European Commission coordinates cross-border enforcement and issues guidance.

Beyond fines, the reputational risk is real. Three EU works councils have publicly challenged employer emotion AI deployments in 2025. These disputes generate press coverage, employee relations damage, and board-level scrutiny that no HR leader wants to explain.

Does This Apply Outside the EU?

Yes, if the AI system is used in the EU or affects people in the EU. A US-headquartered company running facial expression analysis on employees working in France, Germany, or any EU member state is in scope. The law follows the use of the system, not the location of the corporate parent.

Contractors and EOR-employed staff working in the EU are protected the same as direct employees. If you're employing people in the EU through an Employer of Record arrangement, the AI Act applies to any HR technology used in managing those employment relationships. The EOR structure doesn't create a compliance shield.

This territorial scope catches many UK and US companies off guard. They assume that because their HR technology is procured and administered from outside the EU, the regulation doesn't apply. It does. Based on Teamed's work with mid-market companies across multiple jurisdictions, this misconception is one of the most common compliance gaps we encounter.

What About the UK?

The UK has not adopted the EU AI Act. UK employers face separate regulatory risk via UK GDPR, the Equality Act 2010, and the ICO's guidance on AI and biometrics. The ICO treats biometric recognition and workplace monitoring as high-risk processing requiring strong necessity and proportionality justification.

But any UK employer with staff physically working in the EU is still in scope of the EU AI Act for those employees. A London-headquartered company with ten employees in Berlin and five in Amsterdam cannot assume it's outside the regulation; for those fifteen people, the ban applies in full.

The UK government's "Pro-Innovation Approach to AI Regulation" white paper, published in March 2023, takes a different path from the EU, emphasising sector-specific guidance over horizontal legislation. That creates regulatory divergence that UK companies expanding into the EU need to navigate carefully.

If My Vendor Says It's Fine?

Vendor claims are not a defence. The employer is the deployer under the AI Act and carries primary compliance responsibility. A deployer under the EU AI Act is the organisation that uses an AI system in its own operations, and deployers carry operational obligations even when the system is built and hosted by a vendor.

A vendor's compliance assertion should be documented, cross-checked against the actual system functionality, and renewed contractually. If a vendor is still actively marketing emotion recognition features to EU customers, that's a signal to pause use and audit immediately.

The deployer/provider split matters here. A provider under the EU AI Act is the organisation that develops an AI system or places it on the market under its own name. But the deployer, meaning you as the employer, carries the operational compliance burden. You can't outsource that responsibility through procurement.

The practical implication is that your vendor's marketing materials and compliance certifications don't protect you. You need contractual attestations that no Article 5 features are active, and you need to verify those claims against actual system behaviour.

What Do Employers Do Now?

Four steps define the immediate compliance path. First, inventory every AI system touching employee data. Second, classify which systems fall under Article 5. Third, remove or disable prohibited features. Fourth, document the compliance assessment.
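
Step four is the one teams most often skip. As a sketch of what "document the compliance assessment" can mean in practice, here is a minimal Python example that writes a dated record per system; the file name, columns, and classification labels are illustrative, not a regulatory format:

```python
import csv
from datetime import date

def document_assessment(classified_systems: list[dict], path: str = "ai_act_assessment.csv") -> None:
    # Persist one dated row per system so the assessment itself is on record.
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["assessed_on", "system", "classification", "action"])
        writer.writeheader()
        for s in classified_systems:
            writer.writerow({
                "assessed_on": date.today().isoformat(),
                "system": s["name"],
                "classification": s["classification"],
                "action": "feature disabled" if s["classification"] == "prohibited" else "no change",
            })

document_assessment([
    {"name": "video interview platform", "classification": "prohibited"},
    {"name": "payroll system", "classification": "out of scope"},
])
```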

Time is short. Enforcement is active and market surveillance authorities are issuing new guidance regularly. A vendor audit is the practical starting point. Teamed's analysis of mid-market HR stacks finds that video interviewing, call-centre voice analytics, engagement apps, and productivity monitoring are the four categories most likely to contain hidden emotion or stress-inference features.

The feature-word audit approach helps procurement and legal teams scan vendor contracts, product pages, and admin settings for Article 5(1)(f) risk. Look for terms like emotion, mood, stress, deception, attitude, intent, engagement state, voice stress, and micro-expression. Any of these in vendor documentation warrants immediate investigation.
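
That scan is easy to automate. Here is a minimal sketch, assuming vendor contracts and product documentation have been exported as plain-text files into a local vendor_docs folder (the folder name and term list are our assumptions; a hit is a prompt for human review, not a verdict):

```python
import pathlib
import re

# Risk terms from the feature-word audit above; extend with vendor-specific jargon.
RISK_TERMS = [
    "emotion", "mood", "stress", "deception", "attitude", "intent",
    "engagement state", "voice stress", "micro-expression",
]
PATTERN = re.compile("|".join(re.escape(t) for t in RISK_TERMS), re.IGNORECASE)

def scan_vendor_docs(folder: str) -> dict[str, list[str]]:
    """Flag lines in exported vendor documentation containing any risk term."""
    hits: dict[str, list[str]] = {}
    for doc in pathlib.Path(folder).glob("**/*.txt"):
        flagged = [line.strip() for line in doc.read_text(errors="ignore").splitlines()
                   if PATTERN.search(line)]
        if flagged:
            hits[doc.name] = flagged
    return hits

for name, lines in scan_vendor_docs("vendor_docs").items():
    print(f"{name}: {len(lines)} line(s) to review")
```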

What About ChatGPT at Work?

General-purpose AI (GPAI) models have separate obligations under Chapter V of the AI Act. For employers, the practical rule is straightforward: using ChatGPT, Copilot, or Claude as a tool for HR decisions doesn't automatically breach Article 5, because those models don't perform emotion recognition by default.

But if you prompt a GPAI model to score employees on emotional state or engagement from biometric input, you've created a prohibited system. The prohibition attaches to the use case, not the underlying model. A custom GPT configured to analyse employee video calls for stress indicators crosses the line.

The Commission's GPAI Code of Practice, published in July 2025, provides additional guidance on provider obligations. But for employers, the key question is whether your specific use of these tools creates a prohibited practice.

How Does This Sit with GDPR?

Emotion recognition using biometric data already triggered GDPR Article 9 special category data protections. It required explicit consent plus an Article 35 Data Protection Impact Assessment. The AI Act adds a categorical prohibition on top. Even with GDPR consent, the practice is banned at work.

Employers who relied on consent as a lawful basis for emotion AI should now treat that consent as void for EU deployments. The regulatory framework has changed. What was arguably permissible with proper safeguards is now prohibited outright.

This layering of regulations creates compliance complexity that mid-market companies often lack the in-house expertise to navigate. The GDPR and AI Act operate in parallel, with different enforcement mechanisms and different penalty structures. Both apply simultaneously.

What Is Exempt?

Medical or safety-focused systems detecting physical states rather than emotions are exempt. Fatigue detection in logistics, pain inference in healthcare settings, and alertness monitoring for safety-critical roles operate outside the workplace emotion recognition ban.

Emotion recognition on customers or the general public in retail settings is regulated differently. It's not prohibited but classified as high-risk, requiring conformity assessments and ongoing monitoring. The workplace ban is specific to employer surveillance of employees and educational institutions' surveillance of students.

The exemption for physical state detection creates a meaningful distinction. A system that detects whether a driver is falling asleep serves a safety function. A system that detects whether a driver is frustrated or unhappy serves a surveillance function. The former is permitted. The latter is not.

If Employees Complain?

Affected employees can complain to their national data protection authority and to the market surveillance authority designated under the AI Act. Collective actions through works councils are possible in Germany, France, and the Netherlands. Whistleblower protections under EU Directive 2019/1937 apply.

The dual enforcement route is significant. Employees don't have to choose between privacy regulators and AI regulators. They can approach both. And works councils in several EU countries have demonstrated willingness to challenge employer AI deployments publicly.

Reputational risk compounds legal risk. A works council challenge generates internal conflict, press coverage, and board-level attention. For mid-market companies building their employer brand across EU markets, that exposure creates recruitment and retention headwinds that extend well beyond the immediate compliance issue.

What to Do This Week

Run the vendor audit. Contractually require vendors to attest that no Article 5 features are active. Document the assessment. If you operate across jurisdictions or use an EOR, establish clear liability allocation in writing.

For companies employing people across multiple EU countries, the compliance burden scales with your footprint. Each jurisdiction has its own market surveillance authority. Each vendor relationship needs review. Each employment model, whether direct, contractor, or EOR, creates different compliance considerations.

Teamed's Situation Room can walk through your specific stack and jurisdictional exposure. Getting the right structure for where you are means understanding which AI tools create liability and which employment arrangements affect how that liability flows. From first hire to your own presence in-country, these compliance questions need answers before they become enforcement actions.

If you're managing EU employees through an EOR arrangement or considering entity formation, the AI Act compliance picture needs to be part of that structural decision. Talk to an Expert to review your current exposure and build a compliance path that matches your global employment strategy.
