Global regulatory trends on the use of generative artificial intelligence

Governments in the EU, Japan, the United States, and the United Kingdom are moving quickly to regulate generative artificial intelligence, using a mix of binding laws, guidelines, and standards. Diverging philosophies and timelines are making cross-border compliance planning increasingly complex for companies.

Generative artificial intelligence, meaning systems that automatically generate text, images, and other content, is rapidly being integrated into business operations to improve efficiency and enable new services. At the same time, concerns have grown around misinformation, copyright infringement, misuse of personal data, and discriminatory outputs. In response, major jurisdictions are accelerating efforts to introduce regulations and practical guidelines governing the development and use of generative artificial intelligence, creating a complex and evolving compliance environment for companies operating across borders.

In the European Union, the Artificial Intelligence Act entered into force on August 1, 2024, with a phased application schedule: prohibited practices and artificial intelligence literacy obligations apply from February 2, 2025, and obligations related to general-purpose artificial intelligence apply from August 2, 2025. Generative artificial intelligence systems are expected to fall within the scope of general-purpose artificial intelligence, triggering requirements around transparency, risk management systems, and summaries of training data. Businesses linked to the EU market are urged to review contracts and internal rules in anticipation of these obligations and to prepare for heightened scrutiny of how generative artificial intelligence tools are deployed.

Japan is pursuing a different path, favoring governance through flexible guidelines rather than a single comprehensive law. The “Artificial Intelligence Guidelines for Business (Ver. 1.1),” issued jointly by the Ministry of Economy, Trade and Industry and the Ministry of Internal Affairs and Communications, are intended to be updated continuously as technology and use cases evolve. Companies in Japan are encouraged to formalize internal rules on what information may be entered into artificial intelligence systems, how outputs are verified, how copyright and citation are handled, and what contractual safeguards are required with external vendors.

In the United States, federal-level discussions are ongoing while states move ahead with their own rules. New York, for example, enacted legislation in December 2025 requiring certain developers of “frontier models” to disclose safety protocols and report major incidents within 72 hours. The National Institute of Standards and Technology supports implementation through its “Artificial Intelligence RMF Generative Artificial Intelligence Profile” (NIST AI 600-1), a structured risk management framework.

The United Kingdom, rather than enacting a single artificial intelligence law, promotes safe and responsible use through practical guidance such as its “Artificial Intelligence Playbook.”

Across the EU, Japan, the United States, and the United Kingdom, regulatory strategies diverge in both philosophy and implementation, ranging from binding acts to guideline-based governance and standards-driven approaches. These differences make it harder for companies to determine which jurisdiction’s rules should serve as their primary compliance baseline and which emerging issues could become material risks. Multinational organizations are encouraged to monitor regulatory developments closely, align their internal rules and contracts with the strictest applicable standards where practical, and seek expert advice before making significant decisions related to generative artificial intelligence deployment.

Impact Score: 78

Port Washington vote challenges Artificial Intelligence data center expansion

Port Washington, Wisconsin, voters approved a measure that gives residents more control over large tax-incentivized development projects tied to the Artificial Intelligence infrastructure boom. The local pushback is emerging as a closely watched test of how communities respond to massive data center expansion.

Anthropic launches managed agents for enterprise development

Anthropic has introduced Claude Managed Agents, a new tool aimed at helping enterprises build and deploy Artificial Intelligence agents more quickly by handling core infrastructure tasks. The release adds to Anthropic’s recent product push as it competes for a fast-growing enterprise market.

Meta launches Muse Spark for its apps

Meta has introduced Muse Spark, an in-house large language model designed for its products and positioned as the first in a broader Muse family. The model brings multimodal reasoning, coding, shopping, and recommendation features to the Meta Artificial Intelligence app and website, with wider rollout planned.

Microsoft scales back Copilot in Windows 11 apps

Microsoft is pulling back some Copilot branding and interface elements from core Windows 11 apps after sustained user criticism. Notepad and Snipping Tool are among the latest apps to lose the prominent Copilot button as the company repositions some features.
