American companies that use artificial intelligence (AI) to screen job applicants are facing a fast-changing legal landscape in Europe. Rules in the European Union and the United Kingdom are tightening, and companies with European operations can no longer treat AI hiring software as a simple plug-in product. Regulators in both jurisdictions are responding to concerns that automated hiring tools can discriminate against job seekers while offering little visibility into how those outcomes are produced.
In the European Union, the Artificial Intelligence Act classifies most hiring-related AI tools as “high-risk,” including software that screens resumes, ranks candidates, or evaluates performance. Companies must document how their systems work, test them for bias, and ensure that a real human being is involved in final decisions rather than merely approving algorithmic recommendations. A 2023 ruling by the Court of Justice of the European Union, known as the SCHUFA decision, raises the stakes further: it found that generating an automated score can itself count as an automated decision if others rely heavily on that score. In hiring, that means an AI-generated candidate ranking could be treated as a binding automated decision under EU law even when a manager formally makes the final choice.
The United Kingdom has taken a different path, but the direction of travel is similar. The Data (Use and Access) Act 2025, now being phased in, amends existing UK data protection rules rather than replacing them. It targets “significant decisions” made solely by automated means, a category that clearly covers AI-driven hiring. The Information Commissioner’s Office has signaled close oversight and has raised concerns about tools that may disadvantage protected groups. The regulator expects companies to test for bias, explain how their systems operate, and keep humans involved in decision-making.
Fisher Phillips advises US employers to take four immediate steps. Companies should map every AI tool used in hiring and determine its legal category. They should require vendors to provide bias-testing documentation and conduct their own testing as well. Human reviewers must be able to genuinely override AI outputs rather than approve them automatically. Employers also need to comply with local notice and transparency obligations in each country, including rules in Germany, France, Spain, Italy, Austria, and the Netherlands that require consultation with works councils before deploying AI-based HR tools.
Compliance demands are expected to intensify. The UK’s new data law is still being phased in, with more provisions set to take effect in the coming months, and the EU AI Act’s obligations for high-risk systems are rolling out in stages through 2026 and into 2027. Enforcement pressure is building, and companies that operate across borders are being pushed to understand how their models were trained, how fairness is monitored over time, and how local legal requirements differ across jurisdictions.
