MGAs urged to bring shadow Artificial Intelligence under strict governance in 2026

Managing general agents are entering 2026 under pressure to harness generative Artificial Intelligence while imposing robust controls on shadow usage, data protection and regulatory compliance. The article argues that banning the tools is unrealistic and instead calls for structured governance frameworks that bring Artificial Intelligence use out of the shadows.

Managing general agents are entering 2026 in a market where generative Artificial Intelligence is deeply embedded across underwriting, claims triage, bordereaux cleansing, broker communications and back-office workflows, but where ungoverned tools are creating escalating operational, regulatory and cyber risks. The article highlights that generative Artificial Intelligence has become the fastest-growing form of unmanaged technology inside insurance organisations, accelerating the problem of “shadow IT” as staff informally adopt tools such as ChatGPT, Claude and Microsoft Copilot to speed up their work. For MGAs, which often operate with lean structures and decentralised decision-making, this behaviour amplifies exposure: sensitive data such as personal information, policy wordings, pricing assumptions or actuarial insights can be fed into tools that may store, reproduce or surface that information far outside its intended context.

The author details concrete examples of how unmanaged Artificial Intelligence can fail: integrated Artificial Intelligence tools have pulled sensitive files into a user’s workspace because of misconfigured access rights, shared Artificial Intelligence chats have appeared in public search results, and “prompt injection” attacks are increasing as malicious actors manipulate tools into disclosing information. Artificial Intelligence hallucinations, where systems produce incorrect or fabricated content with unwarranted confidence, are described as a major operational threat, and the article stresses that in a regulated environment an Artificial Intelligence-generated misstatement is still a regulatory breach. Regulators such as the FCA are sharpening their focus so that expectations around data integrity, fair value, explainability and model oversight extend directly into Artificial Intelligence usage, including scrutiny of who uses the tools, what data leaves the organisation, and how Artificial Intelligence-influenced decisions are validated. The FCA’s recent Artificial Intelligence Live Testing initiative is presented as part of a broader push toward rigorous verification, strong privacy controls and clear accountability to prevent data poisoning and model hallucinations.

To respond, the author argues that banning Artificial Intelligence is neither realistic nor desirable; instead, MGAs should operationalise it safely through structured governance. Throughout 2025, firms have begun rolling out acceptable-use policies, mandatory employee training, redaction rules, bans on personal accounts, and centralised management of approved platforms and licences. For MGAs, the recommended measures include: a documented Artificial Intelligence usage policy aligned with GDPR, Consumer Duty and internal risk appetite; centralised control of all Artificial Intelligence platforms with no unmanaged deployments; mandatory training before access is granted; strict data-handling rules that prohibit uploading regulated or proprietary content to public tools; regular audits that use security platforms to identify shadow Artificial Intelligence; fact-checking protocols for outputs; and role-based access so that users cannot surface documents they are not authorised to see. The article concludes that Artificial Intelligence governance is a core management responsibility comparable to complaints handling or conduct risk, and urges MGAs to adopt a clear stance that Artificial Intelligence is permitted only under strong controls, bringing its use out of the shadows to protect data, partners and regulatory standing while enabling future innovation.
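As a rough illustration of the strict data-handling rules described above, a redaction pass could strip obvious identifiers before any text leaves the firm for an external tool. This is a minimal sketch: the patterns, placeholder labels and the `POL-` policy-reference format are assumptions for illustration, and a production deployment would rely on a maintained data-loss-prevention platform rather than hand-written regexes.

```python
import re

# Assumed example patterns; a real deployment would use a DLP service or a
# maintained PII-detection library, not a hand-rolled list like this.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "POLICY_REF": re.compile(r"\bPOL-\d{6,}\b"),  # hypothetical in-house format
    "UK_PHONE": re.compile(r"\b(?:\+44\s?|0)\d{10}\b"),
}

def redact(text: str) -> str:
    """Replace each matched identifier with a labelled placeholder
    before the text is sent to any external generative tool."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

if __name__ == "__main__":
    msg = "Email jane.doe@example.com about policy POL-123456 or call 07911123456."
    print(redact(msg))
```

A pass like this only catches known formats; it complements, rather than replaces, the role-based access controls and audits the article recommends.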

Impact Score: 52

FAMU expands artificial intelligence and data science across disciplines

Florida A&M University is scaling an Artificial Intelligence and data science initiative that blends research, ethics, and workforce preparation, backed by new infrastructure and national partnerships. Faculty and students across STEM and non-STEM fields are using these tools to transform teaching, learning, and community impact.

OVHcloud AI Endpoints offers secure generative Artificial Intelligence APIs

OVHcloud AI Endpoints provides serverless generative Artificial Intelligence APIs with a focus on data privacy, open-weight models, and vendor flexibility. The platform targets developers and businesses looking to integrate large language, voice, document, and image models without managing infrastructure.

Kioxia and Sandisk extend Yokkaichi joint venture through 2034

Kioxia and Sandisk have extended their long-running flash memory joint venture in Japan, securing production at the Yokkaichi and Kitakami plants through December 31, 2034 to meet rising demand from generative Artificial Intelligence workloads.
