South Korea’s Artificial Intelligence Basic Act takes effect with new obligations for developers and users

South Korea's Act on the Development of Artificial Intelligence and Establishment of Trust has come into force, introducing transparency, safety, and local-representative duties for a wide range of Artificial Intelligence operators, including those outside the country. The law sets up new oversight bodies and a framework for detailed technical rules that will be finalized by the Ministry of Science and Information and Communication Technology.

South Korea’s Act on the Development of Artificial Intelligence and Establishment of Trust, also called the Artificial Intelligence Basic Act, took effect on January 22, 2026 and joins the European Union Artificial Intelligence Act as a comprehensive regulatory regime. The law applies to both businesses that develop and provide Artificial Intelligence, described as “Artificial Intelligence development business operators,” and businesses that provide products or services incorporating Artificial Intelligence, described as “Artificial Intelligence utilization business operators.” It defines Artificial Intelligence broadly as an electronic implementation of human intellectual abilities, such as learning, reasoning, perception, decision-making and language comprehension, and establishes high-level rules for transparency, high-risk systems and enforcement, while leaving technical details to forthcoming enforcement decrees from the Ministry of Science and Information and Communication Technology.

The Artificial Intelligence Basic Act sets distinct requirements for generative Artificial Intelligence and high-impact Artificial Intelligence, and introduces an additional "high-performance" category tied to compute use. Generative Artificial Intelligence is defined as Artificial Intelligence that mimics the structure and features of its input data to produce outputs such as text, images, sound and video. High-impact Artificial Intelligence is defined as Artificial Intelligence that significantly affects human life, safety or fundamental rights, and includes uses in healthcare, energy, transportation, hiring and biometric analysis. Operators that provide Artificial Intelligence-generated sound, images or video that are difficult to distinguish from human-created content must give clear notice that the content is an output of Artificial Intelligence. Operators of both generative and high-impact Artificial Intelligence must also notify users in advance that their product or service is developed using Artificial Intelligence, and generative Artificial Intelligence outputs additionally require labels indicating whether the content was produced by generative Artificial Intelligence.

High-impact Artificial Intelligence operators face further obligations. Before deployment, they must assess whether their system qualifies as high-impact Artificial Intelligence, seeking an assessment from the Ministry of Science and Information and Communication Technology if needed. They must provide a "meaningful explanation" of outcomes, key criteria, operating principles and a summary of training data; create and deploy a user protection plan; implement mechanisms for human intervention and supervision; and document the actions taken to secure trust and safety. They must also make efforts to assess impacts on fundamental rights through impact assessments before incorporating the Artificial Intelligence into products or services. For high-performance Artificial Intelligence, a legislative notice designates systems trained with a cumulative compute of at least 10²⁶ floating-point operations (FLOPs) as high-performance Artificial Intelligence with associated safety obligations. Operators of such systems may be required to implement a risk management plan and user protection measures spanning the system's life cycle and to report implementation outcomes to the Ministry of Science and Information and Communication Technology.
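As a rough illustration of the compute threshold (not legal guidance), the check can be expressed as a simple comparison. The function names are hypothetical, and the 6 × parameters × tokens estimate is a commonly used approximation for training compute, not something the Act itself prescribes:

```python
# Cumulative training-compute threshold stated in the legislative notice.
HIGH_PERFORMANCE_THRESHOLD_FLOPS = 1e26

def estimate_training_flops(params: float, tokens: float) -> float:
    """Rough training-compute estimate using the common ~6 * N * D
    approximation (an assumption, not part of the Act)."""
    return 6.0 * params * tokens

def is_high_performance(cumulative_flops: float) -> bool:
    """True if cumulative training compute reaches the 1e26 FLOPs threshold."""
    return cumulative_flops >= HIGH_PERFORMANCE_THRESHOLD_FLOPS

# Example: a 70-billion-parameter model trained on 15 trillion tokens
flops = estimate_training_flops(70e9, 15e12)  # ~6.3e24 FLOPs
print(is_high_performance(flops))  # False: well below the 1e26 threshold
```

Under this approximation, today's largest published training runs sit one to two orders of magnitude below the threshold, which suggests the designation targets a small class of frontier systems.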

The law has explicit extraterritorial reach: it applies to Artificial Intelligence systems outside South Korea so long as they affect users or markets in the country, with exceptions for systems used for national security and systems designated by presidential decree. A foreign Artificial Intelligence business without a physical office in Korea must designate a local agent if it meets any of three thresholds: total revenue exceeding one trillion KRW in the previous year, revenue from Artificial Intelligence services exceeding 10 billion KRW in the previous year, or average daily users in Korea exceeding one million during the three months preceding the end of the previous year. The local agent is legally responsible for responding to government inquiries and safety reports. The Ministry of Science and Information and Communication Technology can issue corrective orders, including service suspension where there is a safety threat, and can impose administrative fines of up to 30 million KRW (about US$21,000) for failing to notify users about the use of Artificial Intelligence, failing to appoint a domestic representative, violating corrective orders or refusing government inspections, although the ministry has indicated a one-year grace period before administrative fines are imposed.
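The three local-agent thresholds above can be sketched as a simple disjunctive check. This is an illustrative simplification, not legal advice; the function and parameter names are hypothetical, and the Act's detailed measurement rules (e.g. how average daily users are counted) are left to forthcoming decrees:

```python
def requires_local_agent(total_revenue_krw: float,
                         ai_revenue_krw: float,
                         avg_daily_users_kr: int,
                         has_korean_office: bool) -> bool:
    """Simplified check of whether a foreign Artificial Intelligence business
    must designate a local agent: no Korean office, and any one threshold met."""
    if has_korean_office:
        return False  # the duty targets operators without a physical Korean office
    return (total_revenue_krw > 1e12        # total revenue over 1 trillion KRW
            or ai_revenue_krw > 10e9        # AI service revenue over 10 billion KRW
            or avg_daily_users_kr > 1_000_000)  # over 1 million average daily Korean users

# Example: a foreign operator with 2 trillion KRW total revenue
print(requires_local_agent(2e12, 0, 0, has_korean_office=False))  # True
```

Because the criteria are disjunctive, a small Artificial Intelligence service with a large Korean user base can trigger the duty even with modest revenue.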

Beyond compliance duties, the Artificial Intelligence Basic Act sets up a governance and promotion framework intended to support development of a trusted Artificial Intelligence ecosystem. It calls for the creation of a national Artificial Intelligence committee as a control tower chaired by the president to oversee national policy, an Artificial Intelligence policy center to manage strategic and intellectual development of South Korea’s Artificial Intelligence industry and international cooperation, and an Artificial Intelligence safety research institute tasked with evaluating Artificial Intelligence risk and developing standards. The law also mandates government support for research and development, data centers, small and medium businesses and entrepreneurship. Companies doing business in South Korea are encouraged to review how they use Artificial Intelligence in products and services and to prepare for operationalizing a risk-based compliance framework that aligns with the new requirements as additional implementing regulations and guidelines emerge.
