Regulators in the European Union and the United Kingdom are increasing scrutiny of algorithmic discrimination in employment decisions, combining new Artificial Intelligence (AI)-specific rules with existing data protection and anti-discrimination laws. For US employers using recruitment, screening, promotion, performance evaluation, or worker monitoring systems in those markets, compliance now depends on more than vendor assurances: hiring systems must be understood, tested, and adapted to the legal requirements of each jurisdiction.
In the EU, the compliance burden is especially high because most employment and worker management systems fall into the high-risk category under the EU Artificial Intelligence Act. High-risk systems must meet requirements before deployment, including documented risk management, data governance and quality controls, technical documentation, logging, transparency, and meaningful human oversight. Article 10 of the AI Act focuses on data quality and bias, requiring that high-risk HR AI be trained, validated, and tested on data that are relevant, representative, sufficiently diverse, and “as free of errors as possible.” Organizations are also expected to conduct systematic bias testing, document mitigation efforts, and monitor performance over time.
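One common starting point for the systematic bias testing described above is to compare selection rates across demographic groups and flag large disparities. The sketch below is illustrative only: the function names and records are hypothetical, and the 0.8 threshold comes from the US "four-fifths rule" of thumb, not from any EU or UK legal standard, which may demand different or additional metrics.

```python
from collections import defaultdict

def selection_rates(outcomes):
    """Per-group selection rates from (group, selected) records."""
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, was_selected in outcomes:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def impact_ratios(rates):
    """Ratio of each group's rate to the highest-rate group's rate."""
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Hypothetical screening outcomes: (group label, passed screening?)
records = [("A", True), ("A", True), ("A", False), ("A", True),
           ("B", True), ("B", False), ("B", False), ("B", False)]
rates = selection_rates(records)    # A: 0.75, B: 0.25
ratios = impact_ratios(rates)       # A: 1.0,  B: ~0.33
flagged = [g for g, r in ratios.items() if r < 0.8]  # ["B"]
```

A metric like this would feed the documentation and monitoring duties described above: runs are logged over time, disparities are investigated, and mitigation steps are recorded as part of the technical documentation.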
These obligations sit alongside the GDPR, where Article 22 restricts decisions based solely on automated processing that produce legal or similarly significant effects, including hiring and termination. Employers relying heavily on automated scoring or ranking must ensure meaningful human involvement, explain the logic involved, and provide ways for candidates to challenge decisions. The Court of Justice of the European Union's 2023 SCHUFA judgment reinforced a broad reading of Article 22 by treating a generated score as an automated decision when third parties rely heavily on it. In hiring, that creates added risk for employers that allow automated rankings to effectively determine who is interviewed or selected while a human merely approves the outcome.
The UK has taken a different path. The Data (Use and Access) Act 2025 (DUAA) updates the UK GDPR and the Data Protection Act 2018 rather than copying the EU model. The DUAA reforms the automated decision-making rules by focusing on significant decisions taken solely by automated means, with the strictest limits applying where special category data are involved and safeguards are lacking. The law also simplifies some compliance duties and introduces a limited set of recognised legitimate interests, which may make it easier for employers to rely on legitimate interests for AI-assisted screening and scoring. Even so, employers still need careful risk assessments, appropriate impact assessments, and safeguards around significant automated decisions.
Existing anti-discrimination rules remain fully relevant in the UK, including the Equality Act 2010. Regulators are also sharpening expectations around complaints handling, explainability, bias testing, and genuine human review. Employers are being pushed toward four core actions: inventory and classify HR AI tools; require and perform data quality and bias testing; design workflows in which humans can truly review and override outputs; and follow local consultation and transparency requirements, including involving works councils where applicable in countries such as Germany, Spain, Italy, Austria, the Netherlands, and France.
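The first of the four actions, inventorying and classifying tools, can be as simple as a structured register that records what each system does and which regimes it triggers. The triage sketch below is a hypothetical illustration of that idea, not legal advice: the field names, flags, and classification rules are simplifications of the obligations discussed in this article, and any real inventory would need counsel review per jurisdiction.

```python
from dataclasses import dataclass

# Hypothetical inventory entry for an HR AI tool; field names are
# illustrative, not drawn from any statute.
@dataclass
class HRAITool:
    name: str
    purpose: str                   # e.g. "CV screening", "shift scheduling"
    jurisdictions: list[str]       # markets where the tool is deployed
    fully_automated: bool          # no meaningful human review of outputs?
    uses_special_category_data: bool

def classify(tool: HRAITool) -> list[str]:
    """Rough triage flags reflecting the regimes discussed in this article."""
    flags = []
    if "EU" in tool.jurisdictions:
        # Most employment/worker-management systems are high-risk under the AI Act.
        flags.append("EU AI Act: likely high-risk "
                     "(risk management, bias testing, logging, human oversight)")
    if tool.fully_automated:
        flags.append("GDPR Art. 22 / UK DUAA: safeguards for solely "
                     "automated significant decisions")
    if ("UK" in tool.jurisdictions and tool.fully_automated
            and tool.uses_special_category_data):
        flags.append("UK DUAA: strictest limits "
                     "(special category data, solely automated)")
    return flags

screener = HRAITool("cv-ranker", "CV screening", ["EU", "UK"], True, False)
issues = classify(screener)  # two flags: EU high-risk, Art. 22/DUAA safeguards
```

A register like this makes the remaining actions tractable: each flagged tool gets assigned bias-testing requirements, a human-review workflow, and the local consultation steps its deployment markets require.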
