Generative artificial intelligence creating new errors and omissions risks: Lloyd’s

Lloyd’s Market Association warned that growing reliance on generative artificial intelligence by professional services firms is introducing new exposures for the international errors and omissions market, citing liability for mistakes, data breaches and regulatory challenges.

The Lloyd’s Market Association issued the warning as professional services businesses increasingly deploy generative artificial intelligence tools, Business Insurance reported. The association’s concern was cited in coverage from Continuity Insurance & Risk Magazine, and Business Insurance linked to the original story. The report frames the trend as a market-level issue for errors and omissions insurers rather than a matter confined to individual firms.

According to the coverage, the specific risks generative artificial intelligence poses for businesses include liability for mistakes, data breaches and regulatory challenges. These were highlighted as emerging exposures that could affect errors and omissions claims and underwriting for firms that incorporate generative artificial intelligence into client work or internal processes. The article presents these categories as the primary near-term concerns identified by the Lloyd’s Market Association and reported by Continuity Insurance & Risk Magazine.

The short notice leaves several points unaddressed: it does not identify which professional services sectors are most affected, quantify potential losses or changes to underwriting, or describe specific mitigation measures or insurer responses. Business Insurance included a link to the original Continuity Insurance & Risk Magazine story for readers seeking more detail: https://www.cirmagazine.com/cir/c2025091803.php. The Business Insurance item was published on Sep 19, 2025, and focuses on the association’s warning rather than detailed case studies or policy changes.

Impact Score: 50

Nvidia denies report on Groq chip plans for China

Nvidia says a report that it is preparing Groq inferencing chips for shipment to China is “totally false,” even as interest in H200 sales to the country remains strong. The dispute highlights how closely watched Nvidia’s China strategy has become across training and inferencing hardware.

AMD targets desktop Artificial Intelligence PCs with Copilot+ chips

AMD has introduced the first desktop processors certified for Microsoft Copilot+, aiming to challenge Intel in x86 PCs as demand for on-device Artificial Intelligence computing rises. The company is also balancing that push with export limits that could constrain advanced chip sales in China.

Governance risk highlights from Infosecurity Magazine

Governance and risk coverage centers on regulation, compliance, cybersecurity policy, and the growing role of Artificial Intelligence in enterprise security. Recent headlines point to pressure on critical infrastructure, standards updates, insider threat guidance, and concerns over guardrails for large language models.

Vals publishes public enterprise language model benchmarks

Vals lists a broad set of public enterprise benchmarks spanning law, finance, healthcare, math, education, academics, coding, and beta agent tasks. The index highlights which models currently lead specific enterprise-focused evaluations and how widely each benchmark has been tested.

MIT method spots overconfident Artificial Intelligence models

MIT researchers developed a way to detect when large language models are confidently wrong by comparing their answers with outputs from similar models. The combined uncertainty measure outperformed standard techniques across a range of tasks and may help reduce unreliable responses.
