Generative artificial intelligence (AI), defined here as systems that automatically create text and images, is rapidly being integrated into business operations to improve efficiency and enable new services. At the same time, concerns have grown around misinformation, copyright infringement, misuse of personal data, and discriminatory outputs. In response, major jurisdictions are accelerating efforts to introduce regulations and practical guidelines governing the development and use of generative AI, creating a complex and evolving compliance environment for companies operating across borders.
In the European Union, the Artificial Intelligence Act (AI Act) entered into force on August 1, 2024, with a phased application schedule: prohibited practices and AI literacy obligations apply from February 2, 2025, and obligations for general-purpose AI apply from August 2, 2025. Generative AI systems are expected to fall within the scope of general-purpose AI, which will trigger requirements around transparency, risk management systems, and summaries of training data. Businesses connected to the EU market are urged to review contracts and internal rules in anticipation of these obligations and to prepare for heightened scrutiny of how generative AI tools are deployed.
Japan is pursuing a different path, favoring governance through flexible guidelines rather than a single comprehensive law. The “AI Guidelines for Business (Ver. 1.1),” issued jointly by the Ministry of Economy, Trade and Industry (METI) and the Ministry of Internal Affairs and Communications (MIC), are intended to be updated continuously as technology and use cases evolve. Companies in Japan are encouraged to formalize internal rules on what information may be input into AI systems, how outputs are to be verified, how copyright and citation are handled, and what contractual safeguards are required with external vendors.

In the United States, federal-level discussions are ongoing while states move ahead with their own rules. New York, for example, signed legislation in December 2025 requiring certain developers of “frontier models” to disclose safety protocols and report major incidents within 72 hours. The National Institute of Standards and Technology (NIST) supports implementation through the “Artificial Intelligence RMF Generative Artificial Intelligence Profile” (NIST AI 600-1), a structured risk management framework. The United Kingdom, rather than enacting a single AI law, promotes safe and responsible use through practical guidance such as its “Artificial Intelligence Playbook.”
Across the EU, Japan, the United States, and the United Kingdom, regulatory strategies diverge in both philosophy and implementation, ranging from binding legislation to guideline-based governance and standards-driven approaches. These differences make it harder for companies to determine which jurisdiction’s rules should serve as their primary compliance baseline and which emerging issues could become material risks. Multinational organizations are encouraged to monitor regulatory developments closely, align their internal rules and contracts with the strictest applicable standards where practical, and seek expert advice before making significant decisions about generative AI deployment.
