More than a third of UK businesses unprepared for artificial intelligence risks

Despite recognising artificial intelligence as a top threat, many UK organisations still lack adequate policies and governance to tackle its risks effectively.

Despite widespread acknowledgment of artificial intelligence as a major risk, a significant proportion of UK businesses remain unprepared to combat its associated threats. Nearly 30% of organisations surveyed by CyXcel, a global cybersecurity consultancy, now rank artificial intelligence among their top three concerns. Yet 29% of respondents have only just begun establishing their first risk management strategy, and 31% report having no artificial intelligence governance policy in place at all.

This lack of preparedness exposes businesses to a host of dangers including data breaches, regulatory penalties, reputational damage, and severe operational disruptions. The rapidly evolving nature of artificial intelligence threats compounds the issue. CyXcel’s research found that almost one in five UK and US companies are ill-equipped to handle cyberattacks targeting artificial intelligence and machine learning models, such as data poisoning. Similarly, 16% acknowledge they are unprepared for deepfake or cloning security incidents, indicating a troubling disconnect between recognised risks and proactive protection.

In response to these challenges, CyXcel has introduced its Digital Risk Management (DRM) platform. The new tool aims to support organisations of all sizes and sectors in identifying and managing emerging digital risks, offering guidance on developing effective policies and governance frameworks. Megha Kumar, CyXcel’s chief product officer and geopolitical risk lead, noted the urgency: organisations are eager to leverage artificial intelligence yet lack clear strategies for mitigating threats. The DRM platform intends to fill this gap, especially for companies with limited in-house technical capabilities.

Edward Lewis, CyXcel’s CEO, highlighted the increasingly complex regulatory landscape, particularly for multinational firms. With measures like the EU’s Cyber Resilience Act mandating features such as automated security updates and mandatory incident reporting, and new UK laws expected soon, regulatory compliance is rising in significance. Keeping up with the proliferation of standards and government requirements will be critical as artificial intelligence risks become more prominent across business sectors.


Nvidia to sell fully integrated Artificial Intelligence servers

A report picked up by Tom's Hardware and discussed on Hacker News says Nvidia is preparing to sell fully built rack and tray assemblies that include Vera CPUs, Rubin GPUs and integrated cooling, moving beyond supplying only GPUs and components for artificial intelligence workloads.

Navigating new age verification laws for game developers

Governments in the UK, the European Union, the United States and elsewhere are imposing stricter age verification rules that affect game content, social features and personalisation systems. Developers must adopt proportionate age-assurance measures such as ID checks, credit card verification or artificial intelligence age estimation to avoid fines, bans and reputational harm.

Large language models require a new form of oversight: capability-based monitoring

The paper proposes capability-based monitoring for large language models in healthcare, organizing oversight around shared capabilities such as summarization, reasoning, translation, and safety guardrails. The authors argue this approach is more scalable than task-based monitoring inherited from traditional machine learning and can reveal systemic weaknesses and emergent behaviors across tasks.
