Technology risk trends and priorities for 2026

Grant Thornton outlines how boards and internal audit teams should prepare for escalating technology risks in 2026, from cyber attacks and cloud complexity to Artificial Intelligence regulation and deepfake-enabled fraud. The analysis stresses that resilience, governance, and proactive assurance are now strategic imperatives, not optional enhancements.

Grant Thornton’s 2026 outlook argues that boards are being pushed to treat technology risk oversight as a core strategic duty, supported by the updated UK Corporate Governance Code and a new cyber governance code of practice. Cyber security remains the top enterprise risk, with 2025 marked by disruptive attacks, often ransomware, that exposed supply chain vulnerabilities and in some cases took several months to recover from fully. Internal audit and risk teams are urged to provide regular assurance over accelerated cyber transformation efforts, to focus on identity and access management as SIM-swapping and social engineering undermine traditional two-factor authentication, and to scrutinise third parties with high levels of IT access. From February 2026, internal audit functions subscribing to the Institute of Internal Auditors’ global standards must demonstrate that they have fully considered the topical requirements for cyber security in their audit universe and 2026 plans, while UK regulators signal that failure to meet expectations on cyber response and recovery could trigger enforcement action.
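
Because SIM-swapping specifically defeats codes delivered over the phone network, one concrete assurance step is flagging accounts whose only second factor is phishable. A minimal sketch of such an access-review check (the account data, factor names, and risk rankings are illustrative assumptions, not taken from the report):

```python
# Hypothetical access-review helper: flag accounts whose second factor can be
# defeated by SIM-swapping or real-time phishing. The factor classification
# reflects common guidance (FIDO2/passkeys > authenticator apps > SMS).
PHISHABLE_FACTORS = {"sms", "voice_call", "email"}
PHISHING_RESISTANT = {"fido2_key", "passkey", "platform_authenticator"}

def review_account(username: str, factors: set[str]) -> str:
    """Classify one account's MFA posture for an access review."""
    if factors & PHISHING_RESISTANT:
        return f"{username}: OK (phishing-resistant factor enrolled)"
    if not factors:
        return f"{username}: CRITICAL (no second factor at all)"
    if factors <= PHISHABLE_FACTORS:
        return f"{username}: HIGH RISK (only SIM-swap/phishable factors)"
    return f"{username}: REVIEW (no phishing-resistant factor)"

accounts = {"alice": {"sms"}, "bob": {"totp_app", "sms"}, "carol": {"passkey"}}
for user, factors in accounts.items():
    print(review_account(user, factors))
```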

The article highlights that technology resilience and incident response have been tested by both cyber and non-cyber failures, such as high-profile retailer incidents, major cloud outages, and software update failures that cascaded across industries. Common weaknesses included poor communication, untested response playbooks, the lack of a clear inventory of critical systems and data, and limited use of immutable backups. Organisations are being pushed to move from reactive recovery to proactive resilience: assessing end-to-end recovery capabilities against standards such as ISO 22301 or NIST SP 800-61, validating security information and event management (SIEM) and extended detection and response (XDR) tooling, and embedding resilience into transformation programmes. In financial services, operational resilience rules require keeping important business services within impact tolerances under FCA PS21/3 and PRA SS1/21, with internal audit expected to cover mapping, testing, vulnerability management, incident response and regulatory notification processes.
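
In practice, the impact tolerance test reduces to a simple comparison: did the latest recovery exercise bring each important business service back within its approved tolerance? A hedged sketch of that check (service names, tolerances, and test results are all invented):

```python
# Hypothetical resilience check in the spirit of FCA PS21/3 testing: compare
# measured recovery times from the latest exercise against each important
# business service's board-approved impact tolerance.
impact_tolerance_hours = {"payments": 2, "onboarding": 24, "statements": 72}
last_test_recovery_hours = {"payments": 3.5, "onboarding": 10}

for service, tolerance in impact_tolerance_hours.items():
    measured = last_test_recovery_hours.get(service)
    if measured is None:
        print(f"{service}: NOT TESTED - assurance gap")
    elif measured > tolerance:
        print(f"{service}: BREACH ({measured}h recovery vs {tolerance}h tolerance)")
    else:
        print(f"{service}: within tolerance ({measured}h <= {tolerance}h)")
```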

Cloud governance and security emerge as another priority: research shows that spending on cloud infrastructure services grew by 28% year-over-year in the third quarter of 2025 alone, and most organisations now operate multi-cloud environments across AWS, Microsoft Azure, and Google Cloud Platform. Misunderstandings of the cloud shared responsibility model, along with misconfigurations, are linked to security gaps, resilience issues, compliance failures, and spiralling costs, prompting calls for stronger governance frameworks and the use of FinOps to manage value for money. The rapid deployment of Generative Artificial Intelligence solutions hosted in the cloud adds new data governance and cost risks, while financial services firms face stricter regimes such as the UK’s SS6/24 and the EU’s Digital Operational Resilience Act, which make them accountable for the resilience of critical third parties. Internal audit is advised to look beyond provider reports to assess an organisation’s own controls across platforms, test preparedness for severe but plausible cloud failures, and integrate cost governance into audit plans, including for new Artificial Intelligence projects.
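
Misconfiguration checks of the kind the article implies can often be automated against provider APIs. As one narrow illustration, a sketch using AWS's boto3 SDK to sweep S3 buckets for missing public access blocks, assuming credentials are already configured; a real cloud audit would cover far more controls than this single one:

```python
# Illustrative single-control sweep: report S3 buckets whose public access
# block is absent or only partially enabled. Uses standard boto3 calls.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        cfg = s3.get_public_access_block(Bucket=name)
        settings = cfg["PublicAccessBlockConfiguration"]
        if not all(settings.values()):
            print(f"{name}: public access block partially enabled: {settings}")
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            print(f"{name}: no public access block configured - review required")
        else:
            raise
```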

On Artificial Intelligence, the report notes that Generative and agentic Artificial Intelligence are reshaping innovation and operations but often fail to deliver impact at scale. A recent MIT report is cited as finding that 95% of Generative Artificial Intelligence pilots fail to deliver measurable business impact, reflecting poor integration and misaligned workflows. Agentic Artificial Intelligence systems that reason and act independently are rolling out faster than governance can keep up, creating fresh risks from data breaches, hallucinations, and unintended decisions. Internal audit and risk teams are encouraged to assess organisational readiness for Artificial Intelligence adoption, review use cases for value and ethics, evaluate third-party tools, apply black box auditing techniques, and keep pace with emerging regulatory standards, including expectations in UK financial services that Artificial Intelligence models be governed with the same rigour as other high-risk models under the Senior Managers Regime.
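
The article does not spell out what black box auditing looks like, but one common technique is a counterfactual probe: query the model as an opaque function and measure how often its decision flips when only a sensitive or proxy attribute changes. A minimal sketch with a deliberately biased stand-in model (everything here is invented for illustration):

```python
# Counterfactual black-box probe: the auditor can only call the model, not
# inspect it. The stand-in below penalises a postcode area to make the
# effect visible; a real vendor model would be queried the same way.
import random

def opaque_model(applicant: dict) -> bool:
    score = applicant["income"] / 1000
    if applicant["postcode_area"] == "B":   # hypothetical proxy feature
        score -= 15
    return score > 40

random.seed(0)
trials, flips = 1000, 0
for _ in range(trials):
    applicant = {"income": random.uniform(20_000, 80_000), "postcode_area": "A"}
    counterfactual = dict(applicant, postcode_area="B")
    if opaque_model(applicant) != opaque_model(counterfactual):
        flips += 1

print(f"Decision flipped on postcode alone in {flips / trials:.1%} of probes")
```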

The article devotes significant attention to evolving digital regulation. In 2025, the EU Artificial Intelligence Act and the UK’s Data (Use and Access) Act began reshaping how organisations manage Artificial Intelligence and personal data, introducing tiered, risk-based obligations that emphasise transparency, data provenance, and human oversight. Digital regulation risks expanded beyond financial services in 2025, affecting sectors such as retail, manufacturing, healthcare, and energy and making compliance a de facto requirement for market access. Penalties for non-compliance can reach €35 million or 7% of global turnover, prompting calls for internal audit to monitor readiness for upcoming rules, audit data flows for lawful processing, and upgrade risk frameworks for digital-specific risks. Financial institutions must also juggle overlapping regimes, from data protection law to operational resilience and conduct standards for trading apps and online services.
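
The penalty structure is a greater-of test, so for large groups the turnover limb dominates. A quick worked example (the turnover figure is invented):

```python
# Top-tier exposure under the figures cited in the article: the greater of
# EUR 35 million or 7% of global annual turnover.
def max_penalty_eur(global_turnover_eur: float) -> float:
    return max(35_000_000, 0.07 * global_turnover_eur)

turnover = 2_000_000_000  # hypothetical EUR 2bn group turnover
print(f"Maximum exposure: EUR {max_penalty_eur(turnover):,.0f}")
# 7% of EUR 2bn is EUR 140m, well above the EUR 35m floor
```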

Critical third-party and supply chain risk is described as a systemic concern, particularly where many organisations depend on a single non-substitutable provider such as a major cloud platform. The 2024 CrowdStrike outage and 2025 cyber-attacks that triggered reverse supply chain failures demonstrated that outsourcing does not outsource risk, and that fourth-party dependencies can have far-reaching consequences. New threats include attacks on cloud providers’ physical infrastructure and exploitation of zero-day vulnerabilities in widely used enterprise software. Internal audit is urged to help define third-party categorisation by criticality, perform concentration and systemic risk analysis, and move from periodic checks to continuous, Artificial Intelligence-enabled monitoring of critical suppliers’ controls. In financial services, PRA and FCA rules require board approval of material outsourcing, regulatory notifications, and robust exit plans.
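
One way internal audit can make concentration risk concrete is a Herfindahl-style index over the share of critical services each provider underpins, flagging any single non-substitutable dependency. A hedged sketch (provider names and shares are invented):

```python
# Supplier concentration sketch: square and sum each provider's share of
# critical services. An index near 1.0 means everything rests on one
# provider; any share >= 0.5 is flagged for substitutability analysis.
critical_service_share = {"CloudCo": 0.55, "SaaSVendorA": 0.25, "PaymentsHub": 0.20}

hhi = sum(share ** 2 for share in critical_service_share.values())
print(f"Concentration index: {hhi:.2f} (1.0 = single provider)")

for provider, share in critical_service_share.items():
    if share >= 0.5:
        print(f"{provider}: majority dependency - assess exit plan and substitutes")
```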

Transformation programmes and the role of the project management office are shifting as organisations demand evidence that technology initiatives deliver measurable business value, rather than just meeting time and budget targets. In 2025, PMOs increasingly took responsibility for benefits definition, tracking and reporting, meaning internal audit must expand its remit to assess value realisation processes, governance structures, and data quality in value tracking. Regulators, particularly in UK financial services, are scrutinising whether major transformations deliver fair value under the Consumer Duty; cases in which nine UK banks logged more than 800 hours of outages, Barclays paid £7.5 million, and Vocalink was fined £11.9 million for infrastructure weaknesses are cited as context for heightened expectations and personal accountability for senior managers.
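
At its simplest, value-realisation assurance means reconciling benefits realised to date against the approved business case rather than just milestones. A toy sketch of that reconciliation (programme benefits and figures are invented):

```python
# Hypothetical benefits-realisation check for a transformation programme:
# compare each business-case benefit against what has been realised so far.
business_case = {"cost_savings_gbp": 5_000_000, "customers_migrated": 200_000}
realised = {"cost_savings_gbp": 1_750_000, "customers_migrated": 160_000}

for benefit, target in business_case.items():
    pct = realised.get(benefit, 0) / target
    status = "ON TRACK" if pct >= 0.8 else "INVESTIGATE"
    print(f"{benefit}: {pct:.0%} of target realised [{status}]")
```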

Data governance is presented as a foundational enabler for responsible Artificial Intelligence, with the UK’s Data (Use and Access) Act and the EU’s Data Act introducing new obligations around data portability, transparency, and automated decision-making that become enforceable by mid-2026. The rise of Artificial Intelligence has elevated data governance from a back-office function to a frontline risk discipline, making fragmented data and siloed ownership major vulnerabilities. Internal audit and risk functions are advised to review governance frameworks against UK and EU requirements, assess the accuracy and timeliness of data used in Artificial Intelligence models and analytics, test controls across policy, standards and data architecture, and check lifecycle management and vendor oversight. Regulators such as the FCA are intensifying scrutiny of data lineage and documentation, with several firms undergoing Section 166 reviews in 2025.
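
Checks on the accuracy and timeliness of model inputs can be expressed as simple data-quality gates run before data reaches a model. A minimal sketch (field names, thresholds, and records are invented):

```python
# Hypothetical data-quality gate for records feeding an AI model: flag
# missing key fields and records staler than a freshness threshold.
from datetime import datetime, timedelta, timezone

now = datetime.now(timezone.utc)
records = [
    {"customer_id": "c1", "income": 42_000, "updated": now},
    {"customer_id": "c2", "income": None, "updated": now - timedelta(days=400)},
]

required_fields = ["customer_id", "income"]
max_age = timedelta(days=365)

for rec in records:
    missing = [f for f in required_fields if rec.get(f) is None]
    stale = now - rec["updated"] > max_age
    if missing or stale:
        print(f"{rec['customer_id']}: missing={missing}, stale={stale} - remediate before use")
```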

Zero trust security is described as moving from a strategic concept to baseline architecture as traditional perimeter models struggle with cloud, remote working and Artificial Intelligence-driven attacks. Over the last year, organisations have begun replacing VPNs with zero trust network access, using Artificial Intelligence for real-time threat detection and adaptive access, and expanding micro-segmentation to limit lateral movement. Internal audit should assess zero trust maturity with frameworks such as NIST SP 800-207, evaluate identity and access controls, review micro-segmentation and continuous monitoring, and ensure third-party access aligns with zero trust principles. In parallel, the threat from deepfakes and disinformation has escalated sharply. The UK’s National Cyber Security Centre reports that deepfake use in fraud has surged 400% in 18 months, with an estimated 8 million deepfake videos circulating in 2025 (up from 500,000 in 2023) and human detection rates as low as 24% for high-quality videos, while Gartner has predicted that by 2026, 30% of enterprises will move beyond traditional ID verification methods. Projected losses from large-scale Generative Artificial Intelligence fraud are described as reaching USD 40 billion, and the EU Artificial Intelligence Act requires mandatory disclosure of Artificial Intelligence-generated content, including deepfakes.
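
The core of the NIST SP 800-207 model is that every request is evaluated on identity, device posture, entitlement, and contextual risk, with no implicit trust from network location. A minimal sketch of such a policy decision point (all signals and thresholds are invented):

```python
# Toy zero trust policy decision: authorise per request, never per network.
def authorize(request: dict) -> bool:
    checks = [
        request["identity_verified"],                     # strong authentication
        request["device_compliant"],                      # managed, patched device
        request["resource"] in request["entitlements"],   # least privilege
        request["risk_score"] < 0.7,                      # contextual signal
    ]
    return all(checks)

request = {
    "identity_verified": True,
    "device_compliant": False,   # unmanaged device fails posture check
    "resource": "payments-db",
    "entitlements": {"payments-db"},
    "risk_score": 0.2,
}
print("ALLOW" if authorize(request) else "DENY - step-up or block")
```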

The deepfake section stresses that the UK government considers these synthetic media “the greatest challenge of the online age” as they are used for fraud, reputational harm, and stock price manipulation. Internal audit and risk teams are told to ensure layered controls, including out-of-band confirmation for high-risk transactions, multi-factor authentication for executive decisions on collaboration platforms, and Artificial Intelligence-based detection tools benchmarked against initiatives such as the Alan Turing Institute’s work and the ACE trials. Voice authentication “safe phrases”, rapid communication protocols for corporate affairs and investor relations, and formal disinformation scenarios within market abuse surveillance are also recommended. Financial regulators including the FCA and US Securities and Exchange Commission warn that deepfakes and false information can manipulate securities and cryptocurrency markets, and UK financial institutions are expected to train staff to handle false solvency rumours while responding quickly and transparently under exchange rules.
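
An out-of-band confirmation control translates naturally into a release gate: any instruction above a threshold is held until it is confirmed on a separately registered channel, never the channel the request arrived on. A hedged sketch (thresholds and channel names are invented):

```python
# Hypothetical out-of-band gate against deepfaked voice/video instructions:
# high-value payments need confirmation on a different, pre-registered channel.
from typing import Optional

HIGH_RISK_THRESHOLD_GBP = 100_000

def release_payment(amount_gbp: float, requested_via: str,
                    confirmed_via: Optional[str]) -> bool:
    if amount_gbp < HIGH_RISK_THRESHOLD_GBP:
        return True
    # A confirmation on the requesting channel proves nothing if that
    # channel is the deepfake; insist on an independent one.
    return confirmed_via is not None and confirmed_via != requested_via

print(release_payment(250_000, "video_call", None))                         # False
print(release_payment(250_000, "video_call", "registered_phone_callback"))  # True
```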

What happens when artificial intelligence agents work together in financial decisions

Researchers at Featurespace’s innovation lab studied how teams of artificial intelligence agents behave when jointly assessing income and credit risk, finding that collaboration can unpredictably amplify or reduce bias. Their work highlights the need to test multi-agent systems as a whole, particularly in high-stakes financial use cases like fraud detection and lending.

Reducing online harms through radical platform transparency

Carolina Are argues that piecemeal laws and youth bans will not fix online harms, and that only radical transparency into social media business models and decision making can meaningfully challenge Big Tech power. She also warns that Europe’s ambiguous dependence on United States technology and Artificial Intelligence firms risks entrenching a technoimperialist status quo.

LangChain agents: tooling, middleware, and structured output

LangChain’s agent system combines language models, tools, and middleware to iteratively solve tasks, with support for dynamic models, tools, prompts, and structured output. The docs detail how to configure models, manage state, and extend behavior for production-ready Artificial Intelligence agents.
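
As a rough illustration of the pattern those docs describe, a minimal sketch assuming LangChain 1.x's create_agent API and the @tool decorator; the model identifier, tool, and prompt are placeholders, and the exact surface may differ by version:

```python
# Minimal LangChain-style agent sketch: a model plus one tool, invoked with
# a messages dict. Assumes LangChain 1.x and configured model credentials.
from langchain.agents import create_agent
from langchain_core.tools import tool

@tool
def lookup_rate(currency: str) -> str:
    """Return a stubbed FX rate for the given currency code."""
    return {"EUR": "1.08", "GBP": "1.27"}.get(currency.upper(), "unknown")

agent = create_agent(
    model="openai:gpt-4o-mini",          # placeholder model identifier
    tools=[lookup_rate],
    system_prompt="Use tools when a live value is needed.",
)

result = agent.invoke(
    {"messages": [{"role": "user", "content": "What is the EUR rate?"}]}
)
print(result["messages"][-1].content)
```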
