Regulatory expectations for adaptive artificial intelligence in medical devices

Regulators in the US, EU, and UK are defining expectations for adaptive artificial intelligence in medical technologies, with emphasis on change control, post-market surveillance, and cybersecurity. Companies are expected to build predictable update mechanisms and continuous monitoring around learning systems.

Regulatory authorities in the US, EU, and UK are converging on structured pathways for medical technologies that use adaptive artificial intelligence, aiming to keep innovation aligned with safety and performance obligations. Typical regulatory routes still follow established device classifications and conformity assessment mechanisms, but sponsors are expected to explain how learning systems behave over time, how model updates are controlled, and how clinical performance is assured as software evolves. This places particular emphasis on predictable change processes, documentation of training and validation data, and alignment between declared intended use and real-world behavior.

Post-market surveillance expectations are being expanded for products using adaptive artificial intelligence, reflecting regulators’ concern that real-time learning and frequent updates can shift performance after initial approval. Manufacturers are expected to implement continuous monitoring frameworks, with clearly defined metrics for safety, effectiveness, and data quality, and to collect and analyze field performance data in a structured way. Feedback from users, incident reports, and real-world evidence must feed into a formal surveillance plan that can trigger corrective and preventive actions when performance drifts, and that supports regular reporting obligations to competent authorities.
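The drift-triggered surveillance loop described above can be sketched in a few lines. This is a minimal illustration, not a regulatory-grade framework: the window size, metric (agreement with confirmed ground truth), and alert threshold are hypothetical values that a real surveillance plan would define and justify in its documented monitoring framework.

```python
from collections import deque

class DriftMonitor:
    """Tracks field performance over a sliding window and flags drift
    that would trigger a corrective and preventive action (CAPA) review."""

    def __init__(self, window: int = 100, threshold: float = 0.90):
        # Hypothetical defaults: 100 most recent confirmed outcomes,
        # alert if agreement with ground truth falls below 90%.
        self.outcomes = deque(maxlen=window)
        self.threshold = threshold

    def record(self, prediction, ground_truth) -> None:
        """Log one confirmed field outcome (e.g. from user feedback
        or incident follow-up)."""
        self.outcomes.append(prediction == ground_truth)

    def check(self):
        """Return (accuracy, drifted) once the window is full,
        otherwise (None, False) while evidence accumulates."""
        if len(self.outcomes) < self.outcomes.maxlen:
            return None, False
        accuracy = sum(self.outcomes) / len(self.outcomes)
        return accuracy, accuracy < self.threshold

# Usage: 7 correct and 3 incorrect outcomes in a window of 10
monitor = DriftMonitor(window=10, threshold=0.8)
for pred, truth in [(1, 1)] * 7 + [(1, 0)] * 3:
    monitor.record(pred, truth)
accuracy, drifted = monitor.check()
# accuracy is 0.7, drifted is True -> surveillance plan escalates to CAPA
```

In practice the `check` result would feed the formal surveillance plan: a drift flag opens an investigation and, where warranted, a corrective action and a report to the competent authority.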

Change control for adaptive artificial intelligence is increasingly organized under concepts such as predetermined change control plans, which define in advance what kinds of model or software changes are allowed without a new full regulatory submission. Required elements typically include clear change boundaries, predefined validation methods, and risk management approaches that address both functional and cybersecurity impacts of updates. Cybersecurity is treated as a core safety element, with expectations that manufacturers design secure architectures, maintain vulnerability management processes, and ensure that any remote or automated updates to artificial intelligence systems are authenticated, traceable, and resilient against malicious interference.
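The authentication requirement for remote updates can be illustrated with a small sketch. The shared key, bundle contents, and function names here are hypothetical; real deployments would typically use asymmetric signatures under a manufacturer-controlled key hierarchy rather than a device-provisioned shared secret, but the gate is the same: an update is applied only if its signature verifies, and each applied version is logged for traceability.

```python
import hashlib
import hmac

# Hypothetical device-provisioned secret for illustration only.
SIGNING_KEY = b"device-provisioned-secret"

def sign_update(payload: bytes) -> str:
    """Manufacturer side: produce an HMAC-SHA256 tag for an update bundle."""
    return hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()

def verify_and_apply(payload: bytes, tag: str, audit_log: list) -> bool:
    """Device side: apply the update only if the tag authenticates,
    recording the accepted bundle digest for traceability."""
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, tag):
        return False  # reject tampered or unauthenticated update
    audit_log.append(hashlib.sha256(payload).hexdigest())
    # ... install model weights here ...
    return True

# Usage: an authentic bundle is accepted, a tampered one is rejected
log = []
bundle = b"model-v2.1-weights"
tag = sign_update(bundle)
accepted = verify_and_apply(bundle, tag, log)        # True
rejected = verify_and_apply(bundle + b"!", tag, log)  # False
```

`hmac.compare_digest` is used instead of `==` so the comparison runs in constant time, closing off a timing side channel on the verification step.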


