AI compliance in pharma: EU, US and UK legislative insights

Explore how evolving AI regulations in the EU, US and UK are reshaping pharmaceutical compliance and operational strategies.

The pharmaceutical industry's future is firmly intertwined with technological advancement, particularly the integration of artificial intelligence (AI) across platforms, tools and services. As companies adapt to this next wave of digital transformation, proactive alignment with emerging legislative frameworks becomes essential. Key strategies for pharmaceutical companies include evaluating their own AI readiness, understanding region-specific regulatory environments, embedding ethical standards from the earliest stages, and fostering collaboration across the sector and with regulators.

In the European Union, the EU Artificial Intelligence Act (AI Act) stands as the benchmark comprehensive law, classifying AI systems by risk category, from minimal to unacceptable, and assigning responsibilities to developers, importers, distributors and deployers. The act entered into force in August 2024, with enforcement staggered through to 2028. Phased requirements range from banning applications that pose unacceptable risk to governing general-purpose AI models and ensuring robust compliance for AI embedded in regulated products. The European Medicines Agency (EMA) has issued a 2025–2028 workplan aligned with both the EU AI Act and the broader European Health Data Space, with particular focus on data access, risk-based assessment and human-centered approaches to AI throughout the medicinal product lifecycle.

In the United States, the regulatory landscape spans both federal and state-level activity. The Food and Drug Administration (FDA) released draft guidance in January 2025 for developers of AI-enabled medical devices, stressing transparency, change control plans for algorithm updates, management of model bias, and post-market performance monitoring. This sits alongside executive orders reinforcing the nation's AI infrastructure and calling for economic, security and workforce considerations. At the state level, legislatures in all 50 US states have introduced AI-related bills, with many enacting statutes covering critical infrastructure, transparency in automated decision-making, and worker protections tied to AI integration.

The United Kingdom continues to develop its own framework under the Medicines and Healthcare products Regulatory Agency (MHRA). In response to recommendations from the Regulatory Horizons Council, the MHRA is developing new regulations for AI as a medical device (AIaMD), including initiatives such as its AI regulatory sandbox, piloted with the National Health Service, to work through practical deployment challenges and regulatory solutions. Notably, 2025 saw the introduction of enhanced post-market surveillance rules requiring increased data gathering and monitoring for medical devices that leverage AI.

Given this complexity, pharmaceutical organizations must invest in talent, ongoing training, and adaptable compliance infrastructure to remain competitive and resilient. Ethical AI practices, harmonized global compliance, and close engagement with authorities are now prerequisites for deploying AI safely and effectively in pharma. Regulatory consultancies such as Celegence support this journey by providing expertise in strategy, risk assessment, and implementation to help companies align their operations with fast-evolving global standards.
