Rethinking technology regulation in the age of Artificial Intelligence risks

An editorial argues that Artificial Intelligence exposes the limits of generalist legal approaches and requires risk‑informed, adaptive, and harmonized regulation. It compares regulatory models in the European Union, United States, United Kingdom, and China, and outlines research priorities for information systems scholars.

The editorial contends that Artificial Intelligence challenges Easterbrook’s “Law of the Horse,” arguing that general legal principles no longer suffice for opaque, autonomous systems with systemic impacts. It advances a risk-informed, technology-specific approach grounded in adaptiveness, proportionality, harmonization, and ethics. Framed by the pacing problem, where innovation outstrips oversight capacity, the editors propose a tripartite risk model and compare divergent regulatory logics across the European Union, United States, United Kingdom, and China. The piece calls for embedding risk into policy design and urges interdisciplinary information systems research to inform anticipatory, participatory, and ethically grounded governance.

On the pacing problem, the article points to the swift diffusion of tools like ChatGPT, Italy’s temporary ban over privacy concerns, and the European Union’s accelerated Artificial Intelligence Act negotiations as examples of policy playing catch-up. It highlights reactive rulemaking, knowledge asymmetries between regulators and developers, and shifting risk profiles across contexts as core barriers. Proposed remedies include regulatory sandboxes (as in the United Kingdom), algorithmic audits and transparency mandates (in the European Union), independent evaluations by bodies such as the National Institute of Standards and Technology, and horizon scanning to detect emerging risks early.

The risk framework distinguishes functional, structural, and relational risks. Functional risks cover bias, opacity, and adversarial vulnerabilities, addressed through pre-market certification, bias audits, and explainability requirements. Structural risks involve democratic erosion, labor displacement, and market concentration, prompting measures like sectoral bans and public alternatives. Relational risks arise from power and knowledge asymmetries, such as undisclosed training data and limited user recourse, mitigated through participatory governance and transparency escrows. Responsibility must be clarified across developers, deployers, and regulators, with chain-of-accountability models and designated risk officers encouraged.
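To see how an organization might operationalize this taxonomy, the sketch below encodes the three risk categories, the mitigations the editorial pairs with each, and a responsible party along the developer-deployer-regulator chain in a minimal risk register. The class names, register fields, and example entry are illustrative assumptions, not part of the editorial.

```python
from dataclasses import dataclass
from enum import Enum


class RiskCategory(Enum):
    """The editorial's tripartite risk model."""
    FUNCTIONAL = "functional"    # bias, opacity, adversarial vulnerabilities
    STRUCTURAL = "structural"    # democratic erosion, labor displacement, market concentration
    RELATIONAL = "relational"    # power and knowledge asymmetries, limited recourse

# Mitigations named in the editorial, keyed by category.
MITIGATIONS = {
    RiskCategory.FUNCTIONAL: ["pre-market certification", "bias audits", "explainability requirements"],
    RiskCategory.STRUCTURAL: ["sectoral bans", "public alternatives"],
    RiskCategory.RELATIONAL: ["participatory governance", "transparency escrows"],
}


@dataclass
class RiskEntry:
    """One row of a hypothetical risk register, assigning responsibility
    along the developer-deployer-regulator chain of accountability."""
    system: str
    category: RiskCategory
    description: str
    responsible_party: str

    def planned_mitigations(self) -> list[str]:
        return MITIGATIONS[self.category]


entry = RiskEntry(
    system="resume-screening model",
    category=RiskCategory.FUNCTIONAL,
    description="disparate selection rates across demographic groups",
    responsible_party="deployer",
)
print(entry.planned_mitigations())
```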

Governance sits on a spectrum between self-regulation and sanctions-based enforcement. The editorial outlines the shortcomings of voluntary codes in high-stakes settings and contrasts them with formal penalties like the European Union’s General Data Protection Regulation and the Artificial Intelligence Act’s tiered fines and conformity assessments. It details the United Kingdom’s government-led, pro-innovation, principles-based approach versus the House of Lords’ bill advocating an Artificial Intelligence authority, mandatory audits, and algorithmic passports. The United States continues sectoral oversight supplemented by the 2023 Executive Order, while the article argues for layered models that blend internal controls, third-party audits, and statutory enforcement.
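A minimal sketch of such a layered model might stack oversight obligations by risk tier, so that every system keeps a self-regulation floor while higher tiers accumulate external checks. The tier names and thresholds below are hypothetical, loosely echoing the Artificial Intelligence Act's graduated obligations rather than reproducing them.

```python
from enum import IntEnum


class Tier(IntEnum):
    """Hypothetical risk tiers for a layered governance model."""
    MINIMAL = 1
    LIMITED = 2
    HIGH = 3


def required_oversight(tier: Tier) -> list[str]:
    """Layered model: each tier stacks obligations on top of the previous one."""
    layers = ["internal controls"]  # self-regulation floor for every system
    if tier >= Tier.LIMITED:
        layers.append("transparency disclosures")
    if tier >= Tier.HIGH:
        layers += [
            "third-party audit",
            "conformity assessment",
            "statutory penalties for non-compliance",
        ]
    return layers


for t in Tier:
    print(f"{t.name}: {required_oversight(t)}")
```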

Global divergence complicates compliance. The European Union prioritizes precaution and fundamental rights through ex ante tiers and bans; the United States emphasizes innovation with dispersed agency oversight; China aligns Artificial Intelligence with state priorities and ideological control; and the United Kingdom seeks hybrid flexibility. Harmonization should focus on interoperability rather than uniformity, via aligned risk classifications, auditability, mutual recognition, joint sandboxes, and standards bodies such as ISO and IEEE. The editorial closes with a research agenda spanning dynamic risk assessment, enforcement effectiveness, participatory mechanisms, and transnational coordination, setting the stage for a special issue on policymaking for emerging technology.
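Interoperability without uniformity could, for instance, be operationalized as a crosswalk between jurisdictions' risk labels, so that an assessment completed under one regime can be recognized under another. The shared vocabulary and mappings below are hypothetical illustrations, not official equivalences.

```python
# Hypothetical crosswalk aligning jurisdiction-specific risk labels to a
# shared vocabulary for mutual recognition. Labels are illustrative only.
CROSSWALK = {
    "shared:high": {
        "EU": "high-risk (ex ante conformity assessment)",
        "UK": "safety-critical under principles-based guidance",
        "US": "within scope of the 2023 Executive Order",
    },
    "shared:limited": {
        "EU": "limited-risk (transparency obligations)",
        "UK": "transparency principle applies",
        "US": "sectoral agency oversight",
    },
}


def local_label(shared_label: str, jurisdiction: str) -> str:
    """Return the local designation a mutually recognized label maps to."""
    return CROSSWALK[shared_label][jurisdiction]


print(local_label("shared:high", "EU"))
```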
