Managing general agents (MGAs) are entering 2026 in a market where generative artificial intelligence (AI) is deeply embedded across underwriting, claims triage, bordereaux cleansing, broker communications and back-office workflows, but where ungoverned tools are creating escalating operational, regulatory and cyber risks. The article highlights that generative AI has become the fastest-growing form of unmanaged technology inside insurance organisations, accelerating the problem of “shadow IT” as staff informally adopt tools such as ChatGPT, Claude and Microsoft Copilot to speed up their work. For MGAs, which often operate with lean structures and decentralised decision-making, this behaviour amplifies exposure: sensitive data such as personal information, policy wordings, pricing assumptions or actuarial insights can be fed into tools that may store, reproduce or surface that information far outside its intended context.
The author details concrete examples of how unmanaged AI can fail: integrated AI tools have pulled sensitive files into a user’s workspace because of misconfigured access rights, shared AI chats have appeared in public search results, and “prompt injection” attacks are increasing as malicious actors manipulate tools into disclosing information. AI hallucinations, where systems produce incorrect or fabricated content with unwarranted confidence, are described as a major operational threat, and the article stresses that in a regulated environment an AI-generated misstatement is still a regulatory breach. Regulators such as the FCA are sharpening their focus so that expectations around data integrity, fair value, explainability and model oversight extend directly to AI usage, including scrutiny of who uses which tools, what data leaves the organisation, and how AI-influenced decisions are validated. The FCA’s recent AI Live Testing initiative is presented as part of a broader push toward rigorous verification, strong privacy controls and clear accountability to prevent data poisoning and model hallucinations.
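The misconfigured-access failure mode is worth making concrete. The sketch below is illustrative only and is not drawn from the article: the `DocumentStore`, `User` and role model are hypothetical stand-ins. It shows the underlying principle that entitlements must be re-checked in the retrieval path the AI tool uses, so the model can never surface a file the requesting user could not open directly.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class User:
    user_id: str
    roles: frozenset          # e.g. frozenset({"claims"})

@dataclass(frozen=True)
class Document:
    doc_id: str
    text: str
    allowed_roles: frozenset  # roles entitled to read this file

class DocumentStore:
    """Toy stand-in for the index an AI assistant retrieves from."""

    def __init__(self) -> None:
        self._docs: dict[str, Document] = {}

    def add(self, doc: Document) -> None:
        self._docs[doc.doc_id] = doc

    def retrieve_for(self, user: User, doc_id: str) -> str:
        """Re-check entitlements at retrieval time.

        If the check lives only in the front-end file share, a misconfigured
        AI integration can still pull raw text into a prompt; checking here
        means the model never sees a document the user could not open.
        """
        doc = self._docs[doc_id]
        if user.roles.isdisjoint(doc.allowed_roles):
            raise PermissionError(f"{user.user_id} is not entitled to {doc_id}")
        return doc.text

store = DocumentStore()
store.add(Document("pricing-2026", "Confidential pricing assumptions ...",
                   allowed_roles=frozenset({"actuarial"})))

claims_handler = User("jsmith", roles=frozenset({"claims"}))
try:
    store.retrieve_for(claims_handler, "pricing-2026")
except PermissionError as exc:
    print(exc)  # the assistant receives an error, never the file
```

In a real deployment this check would delegate to the organisation’s identity provider rather than a hard-coded role set; the point is simply where the check sits, not how it is implemented.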
To respond, the author argues that banning AI is neither realistic nor desirable; instead, MGAs should operationalise it safely through structured governance. Throughout 2025, firms have begun rolling out acceptable-use policies, mandatory employee training, redaction rules, bans on personal accounts, and centralised management of approved platforms and licences. For MGAs, the recommended measures include a documented AI usage policy aligned with GDPR, Consumer Duty and internal risk appetite; centralised control of all AI platforms with no unmanaged deployments; mandatory training before access is granted; strict data-handling rules that prohibit uploading regulated or proprietary content to public tools (a minimal redaction sketch follows below); regular audits that use security platforms to identify shadow AI; fact-checking protocols for outputs; and role-based access so that users cannot surface documents they are not authorised to see. The article concludes that AI governance is a core management responsibility comparable to complaints handling or conduct risk, and urges MGAs to adopt a clear stance that AI is permitted only with strong controls, bringing its use out of the shadows to protect data, partners and regulatory standing while enabling future innovation.
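As one illustration of what a redaction rule might look like at the tooling level, the snippet below scrubs obvious personal identifiers before text is passed to any external AI service. This is a minimal sketch assuming a simple regex pass; the patterns and the `redact` function are hypothetical, not taken from the article, and real controls would use a vetted data-loss-prevention tool.

```python
import re

# Illustrative patterns only: production redaction would rely on a vetted
# DLP tool and cover policy numbers, claim references, addresses, etc.
REDACTION_PATTERNS = {
    "EMAIL":    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "UK_PHONE": re.compile(r"\b(?:\+44\s?|0)\d{3,4}[\s-]?\d{3}[\s-]?\d{3,4}\b"),
    "UK_NINO":  re.compile(r"\b[A-Z]{2}\s?\d{2}\s?\d{2}\s?\d{2}\s?[A-D]\b"),
}

def redact(text: str) -> str:
    """Replace each match with a labelled placeholder before any external call."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Claimant John Doe (john.doe@example.com, 07700 900123) disputes ..."
print(redact(prompt))
# Claimant John Doe ([EMAIL REDACTED], [UK_PHONE REDACTED]) disputes ...
```

Note that this naive pass leaves the claimant’s name untouched, which is precisely why the article favours centrally managed, approved platforms over ad-hoc, per-user safeguards.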
