Regulatory frameworks for artificial intelligence lag behind the technology's rapid adoption, especially as generative models become central to many organizations' modernization efforts. The proliferation of generative artificial intelligence has spurred widespread implementation initiatives, even though most development has occurred without established regulatory guardrails. Regulatory bodies are now hurrying to close this gap and bring order to a landscape analysts describe as chaotic, with over 1,000 pieces of proposed artificial intelligence regulation introduced globally between early 2024 and early 2025. This surge demands urgent action from chief information officers, who must ensure compliance amid a tangled and evolving set of rules.
The risks of moving ahead without compliance are substantial. Notable incidents of artificial intelligence gone awry include privacy breaches, security lapses, bias, factual errors, and particularly 'hallucinations'—instances where generative systems produce outputs disconnected from reality. Recent research, including from OpenAI, indicates that newer generative models may exhibit hallucinations even more frequently than earlier versions. These errors, especially when amplified by bias present in training data or algorithms, can have damaging social consequences, notably in regulated sectors like healthcare, law enforcement, finance, and hiring. As these problems mount, governments and regulators worldwide are stepping up oversight, with some—like the European Union—enacting comprehensive horizontal legislation such as the EU Artificial Intelligence Act. Other regions, such as the UK and US, are developing a mix of sector-specific and overarching strategies to address risks, but are unlikely to simply mirror the EU's approach.
The emerging global framework presents a bewildering patchwork: the US leads with 82 distinct artificial intelligence policies and strategies, the EU follows with 63, and the UK has 61, according to AIPRM research. While landmark legislation like the EU Artificial Intelligence Act sets a sweeping baseline, the US's executive orders and industry-specific measures, along with evolving international guidelines from institutions like the OECD and UN, add to the complexity. Compliance is further complicated by the lack of a globally accepted definition of artificial intelligence, meaning organizations must navigate not only a multiplicity of rules but also fundamental ambiguities. Unlike data protection frameworks such as GDPR, regulations for artificial intelligence are nascent, lacking decades of precedent and clarity. This demands that organizations not only track where artificial intelligence is deployed internally, but also maintain ongoing diligence as legislation evolves and novel risks surface.
To stay compliant, organizations should start by identifying all artificial intelligence deployments and reviewing adherence to existing regulations, such as GDPR, while closely monitoring new laws like the EU Artificial Intelligence Act. Transparency, risk assessments, and proactive 'responsible artificial intelligence' practices are emerging as both board-level priorities and regulatory expectations. Experts emphasize the importance of building solutions with robust guardrails from the outset to avoid unforeseen negative impacts. As artificial intelligence legislation matures, compliance will hinge not just on technological adaptation but on a culture of governance, continual risk evaluation, and an agile response to rapidly shifting regulatory expectations.
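One practical way to begin the inventory step described above is a simple internal register of artificial intelligence deployments. The Python sketch below is a minimal, hypothetical illustration rather than any prescribed or regulator-endorsed method: the field names, the risk tiers (loosely echoing the EU AI Act's risk-based approach), and the `needs_review` rule are assumptions, intended only to show how an organization might flag systems that touch regulated domains or personal data for closer compliance review.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical risk tiers, loosely inspired by the EU AI Act's
# risk-based categories; real classification requires legal review.
RISK_TIERS = ("minimal", "limited", "high", "unacceptable")

# Sectors the article highlights as especially sensitive to bias and error.
REGULATED_DOMAINS = {"healthcare", "law enforcement", "finance", "hiring"}


@dataclass
class AIDeployment:
    """One entry in an internal AI inventory (illustrative fields only)."""
    name: str
    owner: str                        # accountable team or individual
    domain: str                       # business area the system operates in
    uses_personal_data: bool          # relevant to GDPR obligations
    risk_tier: str = "minimal"        # self-assessed tier from RISK_TIERS
    mitigations: List[str] = field(default_factory=list)

    def needs_review(self) -> bool:
        """Flag deployments that warrant closer compliance scrutiny."""
        return (
            self.risk_tier in ("high", "unacceptable")
            or self.domain in REGULATED_DOMAINS
            or (self.uses_personal_data and "dpia" not in self.mitigations)
        )


def compliance_report(inventory: List[AIDeployment]) -> List[str]:
    """Return the names of deployments flagged for review."""
    return [d.name for d in inventory if d.needs_review()]


if __name__ == "__main__":
    inventory = [
        AIDeployment("resume-screener", "HR", "hiring",
                     uses_personal_data=True, risk_tier="high"),
        AIDeployment("internal-chatbot", "IT", "support",
                     uses_personal_data=False, risk_tier="limited",
                     mitigations=["output filtering"]),
    ]
    print(compliance_report(inventory))  # ['resume-screener']
```

Even a lightweight register like this gives chief information officers a single place to attach risk assessments and mitigations as new legislation takes effect, though the specific thresholds and fields would need to be set with legal and compliance teams.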