South Korea’s Artificial Intelligence Basic Act takes effect with new obligations for developers and users

South Korea's Act on the Development of Artificial Intelligence and Establishment of Trust has come into force, introducing transparency, safety, and local-representative duties for a wide range of Artificial Intelligence operators, including those outside the country. The law sets up new oversight bodies and a framework of detailed technical rules to be finalized by the Ministry of Science and ICT.

South Korea’s Act on the Development of Artificial Intelligence and Establishment of Trust, also called the Artificial Intelligence Basic Act, took effect on January 22, 2026, joining the European Union Artificial Intelligence Act as a comprehensive regulatory regime. The law applies both to businesses that develop and provide Artificial Intelligence, termed “Artificial Intelligence development business operators,” and to businesses that provide products or services incorporating Artificial Intelligence, termed “Artificial Intelligence utilization business operators.” It defines Artificial Intelligence broadly as an electronic implementation of human intellectual abilities such as learning, reasoning, perception, decision-making and language comprehension. The Act establishes high-level rules for transparency, high-risk systems and enforcement, while leaving technical details to forthcoming enforcement decrees from the Ministry of Science and ICT.

The Artificial Intelligence Basic Act sets distinct requirements for generative Artificial Intelligence and high-impact Artificial Intelligence, and introduces an additional category of “high-performance” Artificial Intelligence tied to compute use. Generative Artificial Intelligence is defined as Artificial Intelligence that mimics the structure and features of its input data to produce outputs such as text, images, sound and video. High-impact Artificial Intelligence is defined as Artificial Intelligence that significantly affects human life, safety or fundamental rights, and includes uses in healthcare, energy, transportation, hiring and biometric analysis. Operators that provide Artificial Intelligence-generated sound, images or video that are difficult to distinguish from human-created content must clearly notify users that the content is an output of Artificial Intelligence. Operators of both generative and high-impact Artificial Intelligence must also notify users in advance that their product or service was developed using Artificial Intelligence, and generative Artificial Intelligence outputs must carry labels indicating that the content was produced by generative Artificial Intelligence.

High-impact Artificial Intelligence operators face further obligations. Before deployment, they must assess whether their system qualifies as high-impact Artificial Intelligence, seeking an assessment from the Ministry of Science and ICT if needed; provide a “meaningful explanation” of outcomes, key criteria, operating principles and a summary of training data; create and deploy a user protection plan; implement mechanisms for human intervention and supervision; and document the actions taken to secure trust and safety. They must also make efforts to assess effects on fundamental rights through impact assessments before incorporating the Artificial Intelligence into products or services. For high-performance Artificial Intelligence, a legislative notice designates systems trained with a cumulative compute of at least 10²⁶ floating-point operations (FLOPs) as high-performance Artificial Intelligence with associated safety obligations; operators of such systems may be required to implement a risk management plan and user protection measures spanning the system’s life cycle, and to report implementation outcomes to the Ministry of Science and ICT.
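The compute threshold described above is a simple cumulative-FLOPs cutoff. As an illustrative sketch only (the function and constant names here are hypothetical, not from the Act, and the enforcement decrees may refine how compute is counted):

```python
# Sketch of the high-performance designation threshold from the legislative
# notice: cumulative training compute of at least 10^26 FLOPs.
# Names are hypothetical; only the 1e26 figure comes from the article.

HIGH_PERFORMANCE_THRESHOLD_FLOPS = 1e26

def is_high_performance(cumulative_training_flops: float) -> bool:
    """Return True if cumulative training compute meets the notice's threshold."""
    return cumulative_training_flops >= HIGH_PERFORMANCE_THRESHOLD_FLOPS

print(is_high_performance(3.1e25))  # below the threshold
print(is_high_performance(2.0e26))  # at or above the threshold
```

A system just under the cutoff would not trigger the high-performance safety obligations, which is why cumulative (not per-run) training compute matters in this framing.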

The law has explicit extraterritorial reach: it applies to Artificial Intelligence systems outside South Korea so long as they affect users or markets in the country, with exceptions for Artificial Intelligence systems used for national security or those designated by presidential decree. Any foreign Artificial Intelligence business without a physical office in Korea must designate a local agent if it has total revenue exceeding one trillion KRW in the previous year, revenue from Artificial Intelligence services exceeding 10 billion KRW in the previous year, or average daily users in Korea exceeding one million during the three months preceding the end of the previous year. The local agent is legally responsible for responding to government inquiries and safety reports. The Ministry of Science and ICT can issue corrective orders, including service suspension where there is a safety threat, and can impose administrative fines of up to 30 million KRW (about US$21,000) for failing to notify users about the use of Artificial Intelligence, failing to appoint a domestic representative, violating corrective orders or refusing government inspections, although the ministry has indicated a one-year grace period before administrative fines are imposed.
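The local-agent duty above is an either/or test over three thresholds, applied only to businesses without a Korean office. A minimal sketch of that decision rule, assuming the thresholds as reported (function and parameter names are hypothetical, and the forthcoming decrees will fix the precise definitions):

```python
# Illustrative decision rule for the local-agent designation criteria.
# Thresholds (1 trillion KRW total revenue, 10 billion KRW AI-service
# revenue, 1 million average daily Korean users) are from the article;
# everything else here is a hypothetical framing.

def must_designate_local_agent(
    total_revenue_krw: float,       # total revenue in the previous year
    ai_service_revenue_krw: float,  # AI-service revenue in the previous year
    avg_daily_users_korea: int,     # avg daily Korean users, last 3 months of prior year
    has_korean_office: bool,
) -> bool:
    if has_korean_office:
        # The duty applies only to foreign businesses without a physical office in Korea.
        return False
    return (
        total_revenue_krw > 1_000_000_000_000       # over 1 trillion KRW
        or ai_service_revenue_krw > 10_000_000_000  # over 10 billion KRW
        or avg_daily_users_korea > 1_000_000        # over one million users
    )
```

Note that the three criteria are disjunctive: meeting any single one triggers the designation duty.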

Beyond compliance duties, the Artificial Intelligence Basic Act sets up a governance and promotion framework intended to support development of a trusted Artificial Intelligence ecosystem. It calls for a national Artificial Intelligence committee, chaired by the president, to serve as a control tower for national policy; an Artificial Intelligence policy center to guide the strategic development of South Korea’s Artificial Intelligence industry and international cooperation; and an Artificial Intelligence safety research institute tasked with evaluating Artificial Intelligence risk and developing standards. The law also mandates government support for research and development, data centers, small and medium businesses, and entrepreneurship. Companies doing business in South Korea are encouraged to review how they use Artificial Intelligence in products and services and to prepare a risk-based compliance framework aligned with the new requirements as additional implementing regulations and guidelines emerge.

Impact Score: 68

Indiana launches Artificial Intelligence business portal

Indiana is rolling out IN AI, a statewide portal meant to help employers adopt Artificial Intelligence with practical guidance, workshops and peer support. State leaders and business groups are positioning the effort as a way to raise productivity, wages and job growth while keeping workers at the center.

Goodfire launches model debugging tool for large language models

Goodfire has introduced Silico, a mechanistic interpretability platform designed to let developers inspect and adjust model behavior during development. The company is positioning it as a way to give smaller teams deeper control over open-source models and more trustworthy outputs.

Nvidia launches Nemotron 3 Nano Omni for enterprise agents

Nvidia has introduced Nemotron 3 Nano Omni, a multimodal open model designed to support enterprise agents that reason across vision, speech and language. The launch extends Nvidia’s push beyond hardware into models and services while targeting more efficient agentic workflows.

Intel 18A-P node improves performance and efficiency

Intel plans to present new results for its 18A-P process at the VLSI 2026 Symposium, highlighting gains in performance, power efficiency, and manufacturing predictability. The updated node is positioned as a stronger option for customers seeking 18A density with better operating characteristics.

EA CEO defends broader Artificial Intelligence use in game development

EA CEO Andrew Wilson defended the company’s internal use of Artificial Intelligence after employee claims that the tools were slowing work rather than helping. He framed the technology as an aid for repetitive quality assurance tasks, even as concerns persist over its broader impact on development.
