The AI Risk Gap: Business Coverage and a Changing Regulatory Landscape

Rapid artificial intelligence adoption is creating new risks for businesses, but insurance and regulation are struggling to keep up.

Artificial intelligence (AI) is rapidly becoming integral to business operations across sectors, with 79% of surveyed companies already using the technology and many planning increased reliance in the near future. Applications span diverse areas, including data analytics, research, price modelling, and customer service, making AI a core driver of efficiency and innovation. Yet this surge in adoption is also introducing a fresh array of business risks, intensified by a global regulatory landscape that is evolving at different speeds and in varying directions.

Legislative frameworks worldwide are beginning to catch up. The EU has led with its comprehensive AI Act, classifying systems by their risk level and establishing a structured legal foundation intended to encourage responsible innovation. Canada’s efforts to implement a similar nationwide Artificial Intelligence and Data Act (AIDA) have stalled, resulting in a province-led regulatory patchwork. The United Kingdom has opted for a more flexible, sector-based regulatory approach, advocating for innovation by leveraging existing regulatory bodies rather than imposing sweeping new laws. Meanwhile, the United States manages AI risk through a mix of federal and state-level initiatives, and Australia is strengthening mandatory guardrails, especially for high-risk scenarios. For international enterprises, these fragmented and regionally distinct regulatory regimes represent a significant compliance challenge, further complicating the risk profile associated with artificial intelligence adoption.

The majority of businesses, however, remain underprepared for these evolving risks. Only 32% of those surveyed by CFC feel confident that their existing insurance policies adequately address exposures generated by artificial intelligence, from intellectual property disputes to data breaches and regulatory infractions. This 'AI risk gap' thus highlights a market-wide lack of clarity in insurance coverage at a time when the use of the technology is nearly ubiquitous. Insurance providers such as CFC are responding by embedding explicit and implied protections for artificial intelligence-related risks across their policies in sectors including healthcare, finance, technology, and media, aiming to support innovation without exposing companies to undue uncertainty or liability. As regulatory scrutiny increases and AI use cases continue to evolve, the need for tailored, comprehensive insurance coverage becomes ever more critical to business resilience.

Impact Score: 65

Sarvam AI signs ₹10,000 crore deal with Tamil Nadu for sovereign AI park

Sarvam AI has signed a ₹10,000 crore memorandum of understanding with the Tamil Nadu government to build India's first full-stack sovereign AI park, positioning the startup at the center of the country's data sovereignty push. The project aims to combine government-exclusive infrastructure with deep-tech jobs and advanced model development for Indian use cases.

Nvidia expands Drive Hyperion ecosystem for Level 4 autonomy

Nvidia is broadening its Drive Hyperion ecosystem with new sensor, electronics and software partners, aiming to accelerate Level 4-ready autonomous vehicles across passenger and commercial fleets. The company is pairing this hardware platform with new AI models and a safety framework designed to support large-scale deployment.
