European banks face mounting regulatory overlap as they work to comply with the EU Artificial Intelligence Act (AI Act) alongside established regimes such as GDPR and DORA. The article argues that the problem is less about unclear rules than about duplicated workstreams, fragmented ownership, and parallel audits that arise when similar requirements are handled in isolation. By introducing new obligations for risk categorisation, transparency, data lineage, and continuous monitoring, the AI Act pushes banks beyond traditional software governance into a stricter, risk-based model for high-risk applications.
The timing of implementation adds further pressure. While the legislation entered into force in August 2024, harmonised standards from the European Standardisation Organisations are still being finalised and are now expected to slip to 2026. That leaves institutions weighing the risk of moving too early and misaligning controls against moving too late and missing the August 2026 enforcement deadline for high-risk AI systems. Meanwhile, many of the Act's foundational requirements overlap with GDPR and DORA, particularly in logging, monitoring, incident reporting, and data governance; yet banks frequently maintain separate owners, systems, and audit trails, which drives duplication and slows AI pilots. The readiness gap shows up in sentiment: only 11 percent of European banks feel prepared for the AI Act and 70 percent say they are only partially ready, while overlapping EU regulations are estimated to impose €150 billion in annual compliance costs across industries, according to European Commission data.
To turn this complexity into advantage, the article proposes an integrated governance fabric that rationalises controls across frameworks. The four-step approach begins with mapping current and planned AI use cases to the Act's risk categories and clarifying organisational roles, then links applicable articles to affected functions, with lightweight classification for new initiatives. Next, banks assess gaps and overlaps against existing controls, focusing on convergence areas such as data classification, incident response, and third-party risk. A provisional, flexible set of AI controls follows, covering items such as dataset bias testing, retraining frequency, and explainability thresholds, with the expectation that only minor updates will be needed once CEN-CENELEC guidance is finalised. Finally, institutions should merge overlapping controls into a single, traceable framework that connects requirements to evidence and can extend to regimes such as NIS2 or ISO 27001. This unified fabric helps banks deploy AI faster, allocate resources more efficiently, and strengthen risk management without creating yet another silo.
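The merge step above can be illustrated as a small data model: controls are mapped to the requirements they satisfy, and any convergence area covered by more than one separately owned control surfaces as a merge candidate. This is a minimal sketch under assumed, purely illustrative regime and control names, not an authoritative mapping of AI Act, GDPR, or DORA articles.

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass(frozen=True)
class Requirement:
    regime: str   # e.g. "AI Act", "GDPR", "DORA" (illustrative)
    topic: str    # convergence area, e.g. "logging"

@dataclass(frozen=True)
class Control:
    name: str
    owner: str
    satisfies: frozenset  # set of Requirement objects this control evidences

# Hypothetical control inventory before rationalisation
controls = [
    Control("Model event logging", "AI governance", frozenset({
        Requirement("AI Act", "logging")})),
    Control("Security event logging", "IT risk", frozenset({
        Requirement("DORA", "logging"),
        Requirement("GDPR", "logging")})),
    Control("Incident reporting workflow", "Ops risk", frozenset({
        Requirement("DORA", "incident reporting"),
        Requirement("AI Act", "incident reporting")})),
]

def merge_candidates(controls):
    """Return convergence areas addressed by more than one control,
    i.e. places where duplicated workstreams can be unified."""
    by_topic = defaultdict(set)
    for control in controls:
        for req in control.satisfies:
            by_topic[req.topic].add(control.name)
    return {topic: names for topic, names in by_topic.items()
            if len(names) > 1}

print(merge_candidates(controls))
# "logging" is evidenced by two separately owned controls -> merge candidate
```

In this toy inventory, logging is maintained twice under different owners, which is exactly the duplication pattern the article describes; the incident-reporting control already spans two regimes and needs no merging.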