South Korea’s New AI Framework Act: A Balancing Act Between Innovation and Regulation
On 21 January 2025, South Korea became the first jurisdiction in the Asia-Pacific (APAC) region to adopt comprehensive artificial intelligence (AI) legislation. Taking effect on 22 January 2026, the Framework Act on Artificial Intelligence Development and Establishment of a Foundation for Trustworthiness (AI Framework Act or, simply, the Act) introduces specific obligations for “high-impact” AI systems in critical sectors, including healthcare, energy, and public services, as well as mandatory labeling requirements for certain applications of generative AI. The Act also provides substantial public support for private-sector AI development and innovation, including support for AI data centers, funding for projects that create and provide access to training data, and encouragement of technological standardization to help SMEs and start-ups foster AI innovation.
In the broader context of South Korean public policies designed to advance AI, the Act is notable for its layered, transparency-focused approach to regulation, its more moderate enforcement posture compared to the EU AI Act, and its significant public support intended to foster AI innovation and development. We cover these in Parts 2 to 4 below.
Key features of the law include:
- Broad extraterritorial reach, applying to AI activities impacting South Korea’s domestic market or users;
- Government support for AI development through infrastructure (AI data centers) and learning resources;
- Focused oversight of “high-impact” AI systems in critical sectors such as healthcare, energy, and public services; providers of AI systems that are not high-impact face few obligations, and the Act provides express carve-outs for AI used for national defense or security;
- Transparency obligations for providers of generative AI products and services, including mandatory labeling of AI-generated content; and
- A moderate enforcement approach with administrative fines up to KRW 30 million (approximately USD 21,000).
In Part 5, we compare the Act with the European Union’s AI Act (EU AI Act). While the AI Framework Act shares some common elements with the EU AI Act, including tiered classification and transparency mandates, South Korea’s regulatory approach differs in its simplified risk categorization (including the absence of prohibited AI practices), its comparatively lower financial penalties, and its establishment of initiatives and government bodies aimed at promoting the development and use of AI technologies. This comparison is intended to assist practitioners in understanding and analyzing key commonalities and differences between the two laws.
Finally, Part 6 of this article places the Act within South Korea’s broader AI innovation strategy and discusses the challenge of regulatory alignment between the Ministry of Science and ICT (MSIT) and South Korea’s data protection authority, the Personal Information Protection Commission (PIPC), in South Korea’s evolving AI governance landscape.
1. Background
On 26 December 2024, South Korea’s National Assembly passed the Framework Act on Artificial Intelligence Development and Establishment of a Foundation for Trustworthiness (AI Framework Act or Act).
The AI Framework Act was officially promulgated on 21 January 2025 and will take effect on 22 January 2026, following a one-year transition period to allow organizations to prepare for compliance. During this period, the MSIT will assist with the issuance of Presidential Decrees and other sub-regulations and guidelines to clarify implementation details.
South Korea was the first country in the Asia-Pacific region to introduce a comprehensive AI bill, in 2021: the Bill on Fostering Artificial Intelligence and Creating a Foundation of Trust. However, the legislative process faced significant hurdles, including political uncertainty surrounding the April 2024 general elections, which raised concerns that the bill could be scrapped entirely.
Meanwhile, by November 2024, South Korea’s AI policy landscape had grown increasingly complex: 20 separate AI governance bills had been proposed independently by different members since the National Assembly began its new term in June 2024. In November 2024, the Information and Communication Broadcasting Bill Review Subcommittee conducted a comprehensive review of these AI-related bills and consolidated them into a single framework, leading to the passage of the AI Framework Act.
At its core, the AI Framework Act adopts a risk-based approach to AI regulation. In particular, it introduces specific obligations for high-impact AI systems and generative AI applications. The AI Framework Act also has extraterritorial reach: it applies to AI activities that impact South Korea’s domestic market or users.
This blog post examines the key provisions of the Act, including its scope, regulatory requirements, and implications for organizations developing or deploying AI systems.
2. The Act establishes a layered approach to AI regulation
2.1 Definitions lay the foundation for how different AI systems will be regulated under the Act
Article 2 of the Act provides three AI-related definitions.
- First, AI is defined as “an electronic implementation of human intellectual abilities such as learning, reasoning, perception, judgment and language comprehension.”
- Second, an AI system is defined as “an artificial intelligence-based system that infers results such as predictions, recommendations and decisions that affect real and virtual environments for a given goal with various levels of autonomy and adaptability.”
- Third, AI technology is defined as “hardware, software technology, or utilization technology necessary to implement artificial intelligence.”
At the core of the Act’s layered approach is its definition of “high-impact AI,” which is subject to more stringent requirements. “High-impact AI” refers to AI systems “that may have a significant impact on or pose a risk to human life, physical safety, and basic rights” that are utilized in critical sectors identified under the AI Framework Act – including energy, healthcare, nuclear operations, biometric data analysis, public decision-making, and education – or in other areas that have a significant impact on the safety of human life and body and the protection of basic rights, as prescribed by Presidential Decree.
The Act also introduces specific provisions for “generative AI.” The Act defines generative AI as AI systems that create text, sounds, images, videos, or other outputs by imitating the structure and characteristics of the input data.
The Act also defines an “AI Business Operator” as a corporation, organization, government agency, or individual conducting business related to the AI industry. The Act subdivides AI Business Operators into two sub-categories (which effectively reflect a developer-deployer distinction):
- “AI Development Business Operators” that develop and provide AI systems, and
- “AI Utilization Business Operators” that offer products or services using AI developed by AI Development Business Operators.
Currently, as will be covered in more detail below, the obligations under the Act apply to both categories of AI Business Operators, regardless of their specific roles in the AI lifecycle. For example, transparency-related obligations apply to all AI Business Operators, regardless of whether they are involved in the development and/or deployment phases of AI systems. It remains to be seen if forthcoming Presidential Decrees to implement the Act will introduce more differentiated obligations for each type of entity.
While the Act expressly excludes AI used solely for national defense and security from its scope, the Act applies to both government agencies and public bodies when they are involved in the development, provision, or use of AI technology in a business-related context. More broadly, the Act also assigns the government a significant role in shaping AI policy, providing support, and overseeing the development and use of AI.
2.2 The AI Framework Act has broad extraterritorial reach
Under Article 4(1), the Act applies not only to acts conducted within South Korea but also to those conducted abroad that impact South Korea’s domestic market, or users in South Korea. This means that foreign companies providing AI systems or services to users in South Korea will be subject to the Act’s requirements, even if they lack a physical presence in the country.
However, Article 4(2) of the Act introduces a notable exemption for AI systems developed and deployed exclusively for national defense or security purposes. These systems, which will be designated by Presidential Decree, fall outside the Act’s regulatory framework.
For global organizations, the Act’s jurisdictional scope raises key compliance considerations. Companies will likely need to assess whether their AI activities fall under South Korea’s regulatory reach, particularly if they:
- Offer AI-powered services to South Korean users;
- Process data or make algorithmic decisions affecting South Korean businesses or individuals; or
- Indirectly impact the South Korean market through AI-driven analytics or decision-making.
This last criterion appears to be a novel policy proposition and differentiates the AI Framework Act from the EU AI Act, potentially making it broader in reach. This is because it does not seem necessary for an AI system to be placed on the South Korean market for the condition to be triggered, but simply for the AI-related activity of a covered entity to “indirectly impact” the South Korean market.
2.3 The Act establishes a multi-layered approach to AI safety and trustworthiness requirements
(i) The Act emphasizes oversight of high-impact AI but does not prohibit particular AI uses
For most AI Business Operators, compliance obligations under the AI Framework Act are minimal. There are, however, noteworthy obligations – relating to transparency, safety, risk management, and accountability – that apply to AI Business Operators deploying high-impact AI systems.
Under Article 33, AI Business Operators providing AI products and services must “review in advance” (presumably, before the relevant product or service is released into a live environment or goes to market) whether their AI systems are considered “high-impact AI.” Businesses may request confirmation from the MSIT as to whether their AI system is to be considered “high-impact AI.”
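To make this screening step concrete, below is a minimal, illustrative sketch of how an operator might triage its systems internally before release. The sector list, field names, and decision logic are our own hypothetical simplifications, not the Act’s criteria, which will be set out in Presidential Decrees:

```python
from dataclasses import dataclass

# Illustrative list of the critical sectors named in the Act. The definitive
# criteria for "high-impact AI" will be prescribed by Presidential Decree and
# may differ from this simplification.
HIGH_IMPACT_SECTORS = {
    "energy",
    "healthcare",
    "nuclear_operations",
    "biometric_analysis",
    "public_decision_making",
    "education",
}


@dataclass
class AISystemProfile:
    name: str
    sector: str  # primary domain of deployment (hypothetical field)
    affects_life_safety_or_rights: bool  # internal risk-assessment flag (hypothetical)


def screen_high_impact(profile: AISystemProfile) -> str:
    """Pre-release triage loosely inspired by the Article 33 'review in advance'
    duty. Returns an internal triage outcome, not a legal determination."""
    if profile.sector in HIGH_IMPACT_SECTORS and profile.affects_life_safety_or_rights:
        return "likely high-impact: prepare for Article 34 obligations"
    if profile.sector in HIGH_IMPACT_SECTORS:
        return "uncertain: consider requesting confirmation from the MSIT"
    return "likely not high-impact under currently known criteria"


if __name__ == "__main__":
    result = screen_high_impact(
        AISystemProfile(
            name="diagnostic-assistant",
            sector="healthcare",
            affects_life_safety_or_rights=True,
        )
    )
    print(result)  # likely high-impact: prepare for Article 34 obligations
```

In practice, any such tool could only support, not replace, a legal determination – particularly while the implementing decrees remain pending.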
Under Article 34, organizations that offer high-impact AI, or products or services using high-impact AI, must meet much stricter requirements, including:
1. Establishing and operating a risk management plan.
2. Establishing and operating a plan to provide explanations of AI-generated results, within technical limits, including key decision criteria and an overview of training data.
3. Establishing and operating “user protection measures.”
4. Ensuring human oversight and supervision of high-impact AI.
5. Preserving and storing documents that demonstrate measures taken to ensure AI safety and reliability.
6. Following any additional requirements imposed by the National AI Committee (established under the Act) to enhance AI safety and reliability.
Under Article 35, AI Business Operators are also encouraged to conduct impact assessments for high-impact AI systems to evaluate their potential effects on fundamental rights. While the language of the Act (i.e., “shall endeavor to conduct an impact assessment”) suggests that these assessments are not mandatory, the Act introduces an incentive: where a government agency intends to use a product or service using high-impact AI, the agency is to prioritize AI products or services that have undergone impact assessments in public procurement decisions. Legislatively stipulating the use of public procurement processes to incentivize businesses to conduct impact assessments appears to be a relatively novel move and arguably reflects the innovation-risk duality seen across the Act.
(ii) The Act prioritizes user awareness and transparency for generative AI products and services
The AI Framework Act introduces specific transparency obligations for generative AI providers. Under Article 31(1), AI Business Operators offering high-impact or generative AI-powered products or services must notify users in advance that the product or service utilizes AI. Further, under Article 31(2), AI Business Operators providing generative AI as a product or service must also indicate that the output was generated by generative AI.
Beyond general disclosure, Article 31(3) of the Act mandates that where an AI Business Operator uses an AI system to provide virtual sounds, images, video or other content that are “difficult to distinguish from reality,” the AI Business Operator must “notify or display the fact that the result was generated by an (AI) system in a manner that allows users to clearly recognize it.”
However, the provision also provides flexibility for artistic and creative expression: it permits notifications or labeling to be displayed in ways that do not hinder creative expression or appreciation. This approach appears aimed at balancing the creative utility of generative AI with transparency requirements. Technical details, such as how notification or labeling should be implemented, will be prescribed by Presidential Decree.
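As a purely illustrative sketch of what compliance tooling might look like, the snippet below attaches both a human-visible notice and machine-readable provenance metadata to a generated output. The field names and notice wording are hypothetical; the actual form of notification and labeling will be prescribed by Presidential Decree:

```python
import json
from datetime import datetime, timezone


def label_generated_content(content: str, model_id: str) -> dict:
    """Wrap a generative AI output with a user-facing notice and a
    machine-readable provenance record. Field names and notice wording are
    hypothetical; the required form of labeling will be set by Presidential
    Decree."""
    return {
        "content": content,
        # Human-visible disclosure in the spirit of Article 31(2).
        "notice": "This content was generated by an AI system.",
        # Machine-readable metadata that downstream platforms could inspect.
        "provenance": {
            "generated_by_ai": True,
            "model_id": model_id,
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    }


if __name__ == "__main__":
    labeled = label_generated_content("A short AI-written poem.", "example-model-v1")
    print(json.dumps(labeled, indent=2))
```

A design along these lines would let downstream platforms detect AI-generated content programmatically while keeping the user-facing disclosure unobtrusive, consistent with the Act’s allowance for creative expression.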
(iii) The Act establishes other requirements that apply when certain thresholds are met
The following requirements focus on safety measures and operational oversight, including specific provisions for foreign AI providers.
Under Article 32, AI Business Operators that operate AI systems whose computational learning capacity exceeds prescribed thresholds are required to identify, assess, and mitigate risks throughout the AI lifecycle, and establish a risk management system to monitor and respond to AI-related safety incidents. AI Business Operators must document and submit their findings to the MSIT.
For accountability, Article 36 provides that AI Business Operators that have no domestic address or place of business and that cross certain user-number or revenue thresholds (to be prescribed) must appoint a “domestic representative” with an address or place of business in South Korea. The details of the domestic representative must be provided to the MSIT.
These domestic representatives take on significant responsibilities, including:
- Submitting safety measure implementation results;
- Managing high-impact AI confirmation processes; and
- Supporting the implementation of safety and trustworthiness measures.
3. The Act grants the MSIT significant investigative and enforcement powers
3.1 The legislation empowers the MSIT with broad authority to investigate potential violations of the Act
Under Article 40 of the Act, the MSIT is empowered to investigate businesses that it suspects of breaching any of the following requirements under the Act:
- Notification and labeling requirements for generative AI outputs;
- Implementation of safety measures and submission of compliance results for AI systems exceeding computational thresholds set by Presidential Decree; and
- Adherence to safety and reliability standards for high-impact AI systems.
When potential breaches are identified, the MSIT may carry out necessary investigations, including the authority to conduct on-site investigations and to compel AI Business Operators to submit relevant data. During these inspections, authorized officials can examine business records, operational documents, and other critical materials, following established administrative investigation protocols.
If violations are confirmed, the MSIT can issue corrective orders, requiring businesses to immediately halt non-compliant practices and implement necessary remediation measures.
3.2 The Act takes a relatively moderate approach to penalties compared to other global AI regulations
Under Article 43 of the Act, administrative fines of up to KRW 30 million (approximately USD 21,000) may be imposed for:
- Failure to comply with corrective or cease-and-desist orders issued by the MSIT.
- Non-fulfillment of notification obligations related to high-impact AI or generative AI systems.
- Failure to designate a required domestic representative, as mandated for certain foreign AI providers operating in South Korea.
This enforcement structure caps fines at significantly lower amounts than other major AI regulations, most notably the EU AI Act.
4. The Act promotes the development of AI technologies through strategic support for data infrastructure and learning resources
The MSIT is responsible for developing comprehensive policies to support the entire lifecycle of AI training data, ensuring that businesses have access to high-quality datasets essential for AI development. To achieve this, the Act mandates government-led initiatives to:
- Support the production, collection, management, distribution, and utilization of AI training data.
- Select and fund projects that generate and provide training data.
- Establish an integrated system for managing and providing AI training data to the private sector.
A key initiative under the Act can be found in Article 25, which provides for the promotion of policies to establish and operate AI Data Centers. Under Article 25(2), the South Korean government may provide administrative and financial support to facilitate the construction and operation of data centers. These centers will provide infrastructure for AI model training and development, ensuring that businesses of all sizes – including small and medium-sized enterprises (SMEs) – have access to these resources.
The Act also promotes the advancement and safe use of AI by encouraging technological standardization (Articles 13 and 14), supporting SMEs and start-ups, and fostering AI-driven innovation. It further facilitates international collaboration and market expansion while establishing a framework for AI testing and verification. Together, these measures aim to strengthen South Korea’s broader AI ecosystem and ensure its responsible development and deployment.
5. Comparing the approaches of South Korea’s AI Framework Act and the EU’s AI Act reveals both convergences and divergences
As South Korea is only the second jurisdiction globally to enact comprehensive national AI regulation, comparing its AI Framework Act with the EU AI Act helps illuminate both its distinctive features and its place in the emerging landscape of global AI governance. As many companies will need to navigate both frameworks, understanding their similarities and differences is essential for global compliance strategies.
Table 1. Comparison of Key Aspects of the South Korea AI Framework Act and EU AI Act
6. Looking ahead
South Korea’s AI Framework Act is the first omnibus AI regulation in the APAC region. The South Korean model is notable for establishing an alternative approach to AI regulation: one that seeks to balance the promotion of AI innovation, development, and use with safeguards for high-impact applications.
6.1 Though the Act establishes a framework for direct regulation of AI, several critical areas require further definition through Presidential Decree
The areas that are expected to be clarified through Presidential Decree include:
- Thresholds for computational capacity, which determine when AI systems face additional obligations;
- Revenue and user criteria that trigger domestic representative requirements for foreign AI Business Operators; and
- Detailed criteria for identifying high-impact AI systems, ensuring consistent risk-based regulation.
The interpretation and implementation of these provisions will significantly shape compliance expectations, influencing how AI businesses—both domestic and international—navigate the regulatory landscape.
6.2 The Act must also be considered in the context of South Korea’s broader efforts to position the country as a leader in AI innovation
The first – and arguably most significant – of these efforts is a bill recently introduced by members of the National Assembly, which seeks to amend the Personal Information Protection Act (PIPA) by creating a new legal basis for the processing of personal information specifically for the development and use of AI. The bill introduces a new Article 28-12, which would permit the use of personal information beyond its original purpose of collection, specifically for the development and improvement of AI systems. This amendment would allow such processing provided that:
- The nature of the data is such that anonymizing or pseudonymizing it would make it difficult to use in AI development;
- Appropriate technical, administrative, and physical safeguards are implemented;
- The purpose of AI development aligns with objectives such as promoting public interest, protecting individuals or third parties, or fostering AI innovation;
- There is minimal risk of harm to data subjects or third parties; and
- The PIPC has confirmed that each of the above requirements has been met (note that the PIPC may also attach further conditions, if necessary).
Second, South Korea’s government is also reportedly exploring other legal reforms to its data protection law to facilitate the development of AI. According to PIPC Chairman Haksoo Ko’s recent interview with a global regulatory news outlet, these reforms could potentially include reforming the “legitimate interests” basis for processing personal information under the PIPA.
South Korea’s Minister for Science and ICT Yoo Sang-im has also reportedly urged the National Assembly to swiftly pass a law on the management and use of government-funded research data to advance scientific and technological development in the AI era.
Third, while creating these pathways for innovation, the PIPC has simultaneously been developing mechanisms to provide oversight of AI systems. For instance, the PIPC’s comprehensive policy roadmap for 2025 (Policy Roadmap), announced in January 2025, outlines an ambitious regulatory framework for AI governance and data protection. In particular, the Policy Roadmap envisions the implementation of specialized regulatory and oversight provisions for the use of unmodified personal data in AI development.
The Policy Roadmap is supplemented by the PIPC’s Work Direction for Investigations in 2025 (Work Direction). Published in January 2025, the Work Direction includes measures intended to provide additional oversight over AI services, including conducting preliminary onsite inspections of AI-powered services, such as AI agents, and reviewing the use of personal information in AI-based legal and human resources services.
A possible instance of this additional emphasis on oversight arose in February 2025, when the PIPC announced a temporary suspension of new downloads of the Chinese generative AI application DeepSeek over concerns about potential breaches of the PIPA.
Fourth, South Korea is seeking to strengthen the accountability of foreign organizations. The PIPC expressed its support for a bill amending the PIPA’s domestic representative system for foreign organizations, which was subsequently passed and took effect on 1 April 2025. The amendment addresses a significant gap in the previous system, which had allowed foreign companies to designate unrelated third parties as their domestic agents in South Korea, often resulting in what one lawmaker described as “formal” compliance without meaningful accountability.
The new requirements mandate that foreign companies with established business units in South Korea designate those local entities as their representatives, while imposing explicit obligations on foreign headquarters to properly manage and supervise these domestic agents. The amendment also establishes sanctions for violations of these requirements, including fines of up to KRW 20 million (approximately USD 14,000).
Fifth, South Korea is seeking to position itself as a global leader in privacy and AI governance through international cooperation and thought leadership. As South Korea prepares to host the annual Global Privacy Assembly in September 2025 – an event involving participants from 95 countries – the PIPC is positioning itself as a bridge between different regional approaches to data protection and AI governance.
6.3 However, these efforts highlight a persistent challenge: ensuring clear alignment between key regulatory authorities in South Korea’s AI governance landscape
While the MSIT was working to finalize the AI Framework Act, the PIPC, like its counterparts in many other jurisdictions, has been assuming a de facto regulatory role for AI applications involving personal data.
However, while the AI Framework Act assigns primary responsibility for AI governance to the MSIT, it does not appear to address or acknowledge the PIPC’s role in the regulatory landscape. This creates a potential situation where two parallel AI regulators – one de jure and the other de facto – will likely continue to operate: the MSIT overseeing general AI system safety and trustworthiness under the AI Framework Act, and the PIPC maintaining its oversight of personal data processing in AI systems under the PIPA.
As a result, organizations developing or deploying AI systems in South Korea may need to navigate compliance requirements from both authorities, particularly when their AI systems process personal data. How this dual regulatory structure evolves and whether a more unified governance approach emerges will be a critical factor in determining the success of South Korea’s ambitious AI strategy in the coming years.
Despite these practical challenges, South Korea’s approach to AI regulation offers a potential governance model for other APAC jurisdictions. Ultimately, the success of the Act will depend on how effectively it balances its dual objectives – fostering AI innovation while ensuring responsible deployment. As AI governance evolves globally, the South Korean experience will provide valuable insights for policymakers, regulators, and industry stakeholders worldwide.
Note: Please note that the summary of the AI Framework Act above is based on an English machine translation, which may contain inaccuracies. Additionally, the information should not be considered legal advice. For specific legal guidance, kindly consult a qualified lawyer practicing in South Korea.
The authors would like to thank Josh Lee Kok Thong, Dominic Paulger, and Vincenzo Tiani for their contributions to this post.