Trust concerns slow Artificial Intelligence projects at medium and large firms

Trust concerns are increasingly delaying Artificial Intelligence rollouts at medium and large businesses in the UK and US. Data privacy, security, explainability, and model transparency now weigh more heavily on buying decisions than regulatory uncertainty.

Many organisations are pausing Artificial Intelligence projects despite continued pressure to generate value from the technology. Gong found that 58% of medium and large businesses have stalled Artificial Intelligence projects, citing a trust gap as the main reason. The findings are based on a survey of 2,056 business leaders in the UK and US, alongside Gong Labs’ analysis of more than 25 million sales interactions processed on its platform.

Among UK respondents, 52% said Artificial Intelligence projects had stalled, compared with 63% in the US. On average, 46% of planned Artificial Intelligence investment had been paused – 47% in the UK and 44% in the US. Trust concerns ranked above regulatory uncertainty in shaping whether businesses proceed with spending. Data privacy and security were cited by 34% of respondents as the main barrier to adoption, followed by explainability at 30% and model transparency at 28%. Regulatory uncertainty followed at 27%.

Businesses also expressed concern about weak returns and competitive pressure. Three-quarters of respondents said their organisations were not getting enough value from Artificial Intelligence, including 70% in the UK and 80% in the US. Gong Labs’ analysis of sales calls pointed to the same pattern. One in four calls referenced security, while uncertainty around training data and how Artificial Intelligence systems learn emerged as the most commonly discussed privacy and security issues. Buyers are increasingly focused on whether vendors can explain their systems, protect data, and set clear limits on use.

Explainability was identified as the leading assurance that would help businesses adopt Artificial Intelligence tools with confidence, cited by 26% overall and 27% in the UK. The ability to explain guardrails for protecting data followed at 25%. Security guarantees built into products and third-party audits or certification were each cited by 23%. A further 22% said transparency over training data use and model logic would help build confidence. The results point to a tougher sales environment for software suppliers, particularly in regulated sectors where questions around auditability, security design, and output generation can determine whether projects move beyond trials and into broader deployment.


Finance officials raise banking security concerns over Anthropic’s Mythos model

Anthropic’s Claude Mythos has prompted urgent discussions among finance ministers, central bankers and banks over the risk that advanced cyber capabilities could expose weaknesses in critical financial systems. Governments and financial institutions are being given early access to test and strengthen defences before any broader release.

UK delays Artificial Intelligence copyright reform

The UK government has postponed immediate copyright reform for Artificial Intelligence, leaving developers, creatives, and rightsholders to operate under existing law. Licensing, transparency, digital replicas, and future litigation are now set to shape the next phase of policy.

Memory architecture is central to autonomous LLM agents

Memory design, not just model choice, determines whether autonomous agents can sustain context, learn from experience, and stay reliable over time. A practical framework centers on how information is written, managed, and read across multiple memory types.

OpenAI expands cyber model access through trusted program

OpenAI has introduced GPT-5.4-Cyber as a restricted model for cybersecurity professionals, widening access through its Trusted Access for Cyber program. The release highlights both the defensive value and misuse risks of more capable Artificial Intelligence tools in security work.
