Proposed amendments to the UK online safety regime are set to clarify how generative artificial intelligence is regulated, accelerating the liability phase for technology providers and raising structural questions for insurers. The Online Safety Act, first proposed in 2019 and passed in 2023, only began to be enforced last year, leaving a gap between legislative timelines and the rapid adoption of artificial intelligence chatbots across digital platforms. Government signals point to closing perceived loopholes, including clearer rules for one-to-one chatbot interactions and stronger data retention duties where a child has died, reflecting growing concern over artificial intelligence-driven harms involving vulnerable users.
Legal experts warn that tightening statutory duties around artificial intelligence will sharpen debates over the scope of the duty of care owed by technology companies when their systems contribute to real-world harm. As artificial intelligence systems become more autonomous, insurer exposure may be framed as negligence, product liability, regulatory breach or failure of service, and enforcement actions could trigger investigations and financial penalties that do not fit neatly within traditional directors and officers or errors and omissions categories. Insurers are being urged to clarify which legal obligations and categories of harm they intend to cover, and to ensure that policy wording keeps pace with fast-evolving technology and regulatory expectations across multiple jurisdictions.
Cross-border deployment of artificial intelligence products is expected to increase the complexity and volume of regulatory investigations, with multiple agencies, such as Ofcom and the ICO, potentially acting within a single country as global oversight intensifies. Underwriters face uncertainty around territorial triggers, reporting obligations and defence cost exposure. They must also confront their own growing reliance on technology in risk assessment: many artificial intelligence firms are still classified generically as software or technology companies within e-traded platforms that may not capture nuanced risk profiles. While some market participants believe existing errors and omissions and directors and officers frameworks are broadly prepared for autonomous artificial intelligence output risk, they caution that long-tail liability classes will only reveal true exposure through claims outcomes. They expect a split between insurers that move to exclude and those that innovate and adapt, with disciplined risk selection, rather than “adding an extra 20/30% to the premium”, likely to drive pricing strategy.
