EU and UK hiring laws raise risks for US employers using Artificial Intelligence

US employers using Artificial Intelligence hiring tools in Europe face tighter rules, growing scrutiny, and potentially serious penalties. Regulators in the European Union and the United Kingdom are pushing companies to test for bias, document systems, and ensure meaningful human oversight.

American companies that use Artificial Intelligence to screen job applicants are facing a fast-changing legal landscape in Europe. Rules in the European Union and the United Kingdom are tightening, and companies with operations in Europe can no longer treat Artificial Intelligence hiring software as a simple plug-in product. Regulators in both jurisdictions are responding to concerns that automated hiring tools can discriminate against job seekers while offering employers and candidates little visibility into how those outcomes are produced.

In the European Union, the EU’s Artificial Intelligence Act classifies most hiring-related Artificial Intelligence tools as “high-risk.” That includes software that screens resumes, ranks candidates, or evaluates performance. Companies must document how their systems work, test them for bias, and ensure that a real human being is involved in final decisions rather than merely approving algorithmic recommendations. A key 2023 European Court of Justice ruling, known as the SCHUFA decision, further raises the stakes. The ruling found that generating an automated score can itself count as an automated decision if others rely heavily on that score. In hiring, that means an Artificial Intelligence-generated candidate ranking could be treated as a binding automated decision under EU law, even when a manager formally makes the final choice.

The United Kingdom has taken a different path, but the direction of travel is similar. The Data (Use and Access) Act 2025, which is being phased in now, amends existing UK data protection rules rather than replacing them. It targets “significant decisions” made solely by automated means, a category that covers fully automated Artificial Intelligence-driven hiring decisions. The Information Commissioner’s Office has signaled close oversight and has raised concerns about tools that may disadvantage protected groups. The regulator expects companies to test for bias, explain how their systems operate, and keep humans involved in decision-making.

Fisher Phillips advises US employers to take four immediate steps. Companies should map every Artificial Intelligence tool used in hiring and determine its legal category. They should require vendors to provide bias testing documentation and conduct their own testing as well. Human reviewers must be able to genuinely override Artificial Intelligence outputs rather than approve them automatically. Employers also need to comply with local notice and transparency obligations in each country, including rules in Germany, France, Spain, Italy, Austria, and the Netherlands that require consultation with workers’ councils before deploying Artificial Intelligence-based HR tools.

Compliance demands are expected to intensify. The UK’s new data law is still being phased in, with more provisions set to take effect in the coming months. The EU Artificial Intelligence Act’s obligations for high-risk systems are also rolling out in stages through 2026 and into 2027. Enforcement pressure is building, and companies that operate across borders are being pushed to understand how their models were trained, how fairness is monitored over time, and how local legal requirements differ across jurisdictions.


