The May 2025 Global Artificial Intelligence Regulatory Update presents a comprehensive review of the latest governance, privacy, and policy developments across major global jurisdictions. Central banks have emerged as crucial actors, with the Bank for International Settlements emphasizing the need for robust data governance frameworks, collaboration with the private sector, and improved Artificial Intelligence literacy to address privacy, security, and ethical risks in financial services. Institutions are urged to balance computational resources against security and cost, improve data quality, and promote transparent, accountable adoption.
In Asia, Hong Kong’s financial sector is rapidly adopting generative Artificial Intelligence: an industry report indicates that 75% of surveyed institutions are implementing or piloting such technologies, a figure expected to rise to 87% within five years. Challenges remain around accuracy, data privacy, and resource constraints, but regulatory engagement aims to ensure responsible innovation. The Hong Kong Privacy Commissioner reinforced this effort by publishing a checklist for organizations on generative Artificial Intelligence use, covering lawful use, data inputs, security, and employee guidance. Businesses are encouraged to align their policies, train staff, strengthen data protections, and proactively audit their Artificial Intelligence adoption.
European regulators continue to lead on Artificial Intelligence governance. The European Commission updated its guidelines for the responsible use of Artificial Intelligence in research, promoting reliability, honesty, respect, and accountability, all grounded in research integrity standards. Concurrently, the European Data Protection Board published an in-depth report on privacy risks and mitigation strategies for large language models under the EU Artificial Intelligence Act and the GDPR, urging organizations to conduct continuous risk assessments and ensure transparency. The European Commission’s launch of its AI Continent Action Plan emphasizes infrastructure investment, data sharing, integration of Artificial Intelligence into industry and public services, workforce development, and support for regulatory compliance. A key update is the third draft of the General-Purpose AI Code of Practice, adherence to which is expected to provide model providers with a presumption of conformity under the Act.
The Middle East, notably the UAE, is embracing Artificial Intelligence for legislative drafting, aiming to speed up the lawmaking process by a reported 70%. Experts, however, caution about transparency, potential biases, and the ability of Artificial Intelligence systems to interpret legal nuance. In the UK, the Bank of England stressed the importance of monitoring Artificial Intelligence in the financial system to manage systemic risks, while Parliament is again debating an Artificial Intelligence Regulation Private Members’ Bill. Although passage of such a Bill remains unlikely, the ongoing political debate is likely to shape government approaches.
The US is prioritizing both federal and state-level oversight. The White House issued new memoranda requiring all federal agencies to update their procedures, manage risks, and appoint Chief Artificial Intelligence Officers, underscoring a shift to a pro-innovation stance. Texas advanced the revised Responsible AI Governance Act, which introduces regulatory requirements for high-risk systems, consumer rights, and a regulatory sandbox for innovation. At the national level, the bipartisan CREATE AI Act seeks to democratize Artificial Intelligence resources and bolster US global competitiveness. Additional initiatives include Department of Energy plans to accelerate Artificial Intelligence infrastructure by allocating federal sites for data centers, and NIST’s latest guidance on adversarial machine learning, which informs risk mitigation and security practices. Collectively, these measures signal unprecedented momentum, coordination, and scrutiny around Artificial Intelligence regulation worldwide.