UK online safety overhaul sharpens artificial intelligence liability for insurers

Proposed changes to the UK Online Safety Act are tightening expectations on artificial intelligence platforms, forcing insurers to reconsider how they classify, underwrite and word coverage for emerging tech risks.

Proposed amendments to the UK online safety regime are set to clarify how generative artificial intelligence is regulated, accelerating the liability phase for technology providers and raising structural questions for insurers. The Online Safety Act, first proposed in 2019 and passed in 2023, only came into enforcement last year, leaving a gap between legislative timelines and the rapid adoption of artificial intelligence chatbots across digital platforms. Government signals point to closing perceived loopholes, including clearer rules for one-to-one chatbot interactions and stronger data retention duties where a child has died, reflecting growing concern over artificial intelligence driven harms involving vulnerable users.

Legal experts warn that tightening statutory duties around artificial intelligence will sharpen debates over the scope of the duty of care owed by technology companies when their systems contribute to real-world harm. As artificial intelligence systems become more autonomous, insurer exposure can be characterised as negligence, product liability, regulatory breach or failure of service. Enforcement actions may trigger investigations and financial penalties that do not fit neatly within traditional directors and officers or errors and omissions categories. Insurers are being urged to clarify which legal obligations and categories of harm they intend to cover, and to ensure that policy coverage and wording keep pace with fast-evolving technology and regulatory expectations across multiple jurisdictions.

Cross-border deployment of artificial intelligence products is expected to increase the complexity and volume of regulatory investigations, with multiple agencies such as Ofcom and the ICO potentially acting within a single country as global oversight intensifies. Underwriters face uncertainty around territorial triggers, reporting obligations and defence cost exposure. They must also confront their own growing reliance on technology in risk assessment: many artificial intelligence firms are still classified generically as software or technology companies within e-traded platforms that may not capture nuanced risk profiles. While some market participants believe existing errors and omissions and directors and officers frameworks are broadly prepared for autonomous artificial intelligence output risk, they caution that long-tail liability classes will only reveal true exposure through claims outcomes. They expect a split between insurers that move to exclude and those that innovate and adapt, with disciplined risk selection rather than "adding an extra 20/30% to the premium" likely to drive pricing strategy.


UK MPs open inquiry into artificial intelligence and edtech in education

UK MPs have launched a cross-party inquiry into how artificial intelligence and education technology are reshaping learning across early years, schools, colleges and universities, and how government should balance innovation with safeguards. The Education Committee will examine opportunities to improve teaching and reduce workload, alongside risks around inequality, privacy, safeguarding and assessment.

Most UK firms see artificial intelligence training gap as shadow tool use grows

New research finds that six in 10 UK businesses say employees lack comprehensive artificial intelligence training, even as shadow use of unapproved tools becomes widespread and investment surges. Executives warn that without stronger skills, governance and strategy, many organisations risk missing out on expected artificial intelligence returns.

COSO issues internal control roadmap for governing generative artificial intelligence

COSO has released governance guidance that applies its Internal Control-Integrated Framework to generative artificial intelligence, offering audit-ready control structures and implementation tools for organizations. The publication details capability-based risk mapping, aligned controls, and practical templates to help institutions manage emerging technology risks.
