In April 2026, senior finance ministers, central bankers and regulators from multiple jurisdictions held urgent discussions at IMF meetings in Washington D.C. The focus was a new, unreleased artificial intelligence (AI) system known as Claude Mythos Preview, developed by Anthropic as part of its wider Claude family of models. Governments are already treating Mythos as a potential systemic cyber risk capable of reshaping the threat landscape on which modern financial systems, and the insurance policies supporting them, depend.
Mythos is presented as part of a growing class of frontier AI systems. Unlike conventional generative AI tools, it is claimed to autonomously identify and exploit vulnerabilities in complex software environments with minimal human input. According to Anthropic and independent testing by the UK's AI Security Institute, preview versions surfaced thousands of previously unknown security flaws across major operating systems and web browsers, including weaknesses that had gone undetected for decades despite extensive testing. Anthropic chose not to release the system publicly. Instead, it provided limited access to selected technology companies and financial institutions under Project Glasswing, so that critical infrastructure operators could test and remediate vulnerabilities before similar systems become widely available.
For cyber insurers, the main concern is that AI-enabled cyber risks may be faster, more scalable, and more interconnected than previously anticipated. Aggregation risk has long been tied to shared dependencies such as cloud providers, operating systems, and widely deployed software. Mythos sharpens that concern by increasing the probability that a single latent vulnerability could be identified and exploited near-simultaneously across large numbers of policies. That raises the prospect of more frequent and more severe malicious cyber attacks, and of larger correlated loss scenarios across the market.
A central coverage issue is whether existing insurance policies respond to AI-enabled cyber losses at all. In most cases, policies are silent: most cyber insurance wordings do not expressly mention AI, leaving many AI-related losses non-affirmatively covered (neither expressly covered nor excluded), a position increasingly described as "silent AI". That ambiguity could drive disputes over causation and policy interpretation, including whether an AI system should be treated as the cause of a cyber attack or merely an enabling factor, and how definitions such as "security failure" or "malicious act" apply when there is no direct human involvement.
The market response is beginning to split. Some insurers are affirmatively covering AI-related risks through endorsements or revised wordings, while others are adding AI-related exclusions. There is also a growing number of stand-alone AI insurance policies designed specifically for these risks. What distinguishes Mythos from earlier systems is the degree of government and regulatory involvement, with authorities in the UK, US and India treating it as a matter requiring coordinated scrutiny. That could accelerate pricing discipline and product innovation across cyber insurance, especially around whether AI-related risks are affirmatively covered, excluded, or moved into dedicated policies.
