Understanding Japan’s AI Promotion Act: An “Innovation-First” Blueprint for AI Regulation
The global landscape of artificial intelligence (AI) is being reshaped not only by rapid technological advancement but also by a worldwide push to establish new regulatory regimes. In a landmark move, on May 28, 2025, Japan’s Parliament approved the “Act on the Promotion of Research and Development and the Utilization of AI-Related Technologies” (人工知能関連技術の研究開発及び活用の推進に関する法律) (AI Promotion Act, or Act), making Japan the second major economy in the Asia-Pacific (APAC) region to enact comprehensive AI legislation. Most provisions of the Act (except Chapters 3 and 4, and Articles 3 and 4 of its Supplementary Provisions) took effect on June 4, 2025, marking a significant transition from Japan’s soft-law, guideline-based approach to AI governance to a formal legislative framework.
This blog post provides an in-depth analysis of Japan’s AI Promotion Act, its strategic objectives, and its distinctive regulatory philosophy. It builds on our earlier analysis of the Act (during its draft stage), available exclusively for FPF Members in our FPF Members Portal. The post begins by exploring the Act’s core provisions in detail, before placing the Act in a global context by drawing detailed comparisons with two other pioneering omnibus AI regulations: (1) the European Union (EU)’s AI Act, and (2) South Korea’s Framework Act on AI Development and Establishment of a Foundation for Trustworthiness (AI Framework Act). This comparative analysis reveals three distinct models for AI governance, creating a complex compliance matrix that companies operating in the APAC region will need to navigate going forward.
Part 1: Key Provisions and Structure of the AI Promotion Act
The AI Promotion Act establishes policy drivers to make Japan the world’s “most AI-friendly country”
The Act’s primary purpose is to establish foundational principles for policies that promote the research, development, and utilization of AI in Japan to foster socio-economic growth.
The Act implements the Japanese government’s ambition, outlined in a 2024 whitepaper, to make Japan the world’s “most AI-friendly country.” The Act is specifically designed to create an environment that encourages investment and experimentation by deliberately avoiding the imposition of stringent rules or penalties that could stifle development.
This initiative is a direct response to low rates of AI adoption and investment in Japan. A summary of the AI Promotion Act from Japan’s Cabinet Office highlights that from 2023 to 2024, private AI investment in Japan was a fraction of that seen in other major markets globally (such as the United States, China, and the United Kingdom), with Stanford University’s AI Index Report 2024 placing Japan 12th globally on this metric. The Act is, therefore, a strategic intervention intended to reverse these trends by signaling strong government support and creating a predictable, pro-innovation legal environment.
The AI Promotion Act is structured as a “fundamental law” (基本法), establishing high-level principles and national policy direction rather than detailed, prescriptive rules for private actors.
While introducing a basis for binding AI regulation, the Act also builds on Japan’s longstanding “soft law” approach to AI governance, relying on non-binding government guidelines (such as the 2022 “Governance Guidelines for the Implementation of AI Principles” and 2024 “AI Business Operator Guidelines”), multi-stakeholder cooperation, and the promotion of voluntary business initiatives over “hard law” regulation. The Act’s architecture therefore embodies the Japanese Government’s broader philosophy of “agile governance” in digital regulation, which posits that in rapidly evolving fields like AI, rigid, ex-ante regulations are likely to quickly become obsolete and may hinder innovation.
The AI Promotion Act adopts a broad, functional definition of “AI-related technologies.”
The primary goal of the AI Promotion Act (Article 1) is to establish the foundational principles for policies that promote the research, development, and utilization of “AI-related technologies” in Japan. This term refers to technologies that replicate human intellectual capabilities such as cognition, inference, and judgment through artificial means, as well as the systems that use them. This non-technical definition appears to be designed for flexibility and longevity. Notably, the law takes a unique approach to defining the scope of covered AI technologies and does not adopt the OECD definition of an AI system, which inspired the definition in the EU AI Act.
The Act provides a legal basis for five fundamental principles to guide AI governance in Japan
Under Article 3 of the Act, these principles include:
- Alignment: AI development and use should align with existing national frameworks, including the Basic Act on Science, Technology and Innovation (科学技術・イノベーション基本法), and the Basic Act on Forming a Digital Society (デジタル社会形成基本法).
- Promotion: AI should be promoted as a foundational technology for Japan’s economic and social development, with consideration for national security.
- Comprehensive advancement: AI promotion should be systematic and interconnected across all stages, from basic research to practical application.
- Transparency: Transparency in AI development and use is necessary to prevent misuse and the infringement of citizens’ rights and interests.
- International leadership: Japan should actively participate in and lead the formulation of international AI norms and promote international cooperation.
The AI Promotion Act adopts a whole-of-society approach to promoting AI-related technologies
Broadly, the Act assigns high-level responsibilities to five groups of stakeholders:
- The National Government bears the primary responsibility for formulating and implementing comprehensive AI policies. It is mandated to use AI to improve its own administrative efficiency, strengthen stakeholder collaboration, and take all necessary legislative and financial measures to promote AI.
- Local Governments are responsible for formulating and implementing independent AI policies tailored to their local contexts in cooperation with the national government.
- Research and Development (R&D) Institutes are expected to actively engage in AI research, disseminate findings, foster talent, and cooperate with government policies.
- Business Operators (defined in the Act as individuals or organizations planning to develop, offer, or incorporate artificial intelligence technologies into their products, services, or business operations) are encouraged to actively utilize AI to improve efficiency and innovate, and to cooperate with government policies.
- Citizens are expected to deepen their understanding of AI and cooperate with government policies.
To fulfill its responsibilities, the National Government is mandated to take several Basic Measures, including:
- promoting R&D for practical applications;
- developing and promoting shared access to essential infrastructure like computing power and datasets;
- creating guidelines in line with international standards;
- fostering a skilled workforce;
- promoting public education and awareness;
- monitoring AI trends and analyzing cases of rights infringement; and
- promoting international cooperation.
The Act adopts a cooperative approach to governance and enforcement
The Act’s approach to governance and enforcement diverges significantly from overseas legislative frameworks.
The centerpiece of the Act’s new governance structure is a centralized AI Strategy Headquarters within Japan’s Cabinet. Chaired by the Prime Minister and including all other Cabinet ministers as members, this body ensures a whole-of-government, coordinated approach to AI policy.
The AI Strategy Headquarters’ primary mandate is to formulate and drive the implementation of a comprehensive national Basic AI Plan, which will provide more substantive details on the government’s AI strategy.
The AI Promotion Act contains no explicit penalties, financial or otherwise, for non-compliance with its requirements or, more broadly, for misusing AI. Instead, its enforcement power rests on a unique cooperative and reputational model.
- A “Duty to Cooperate”: The sole direct obligation imposed on private sector businesses is to “endeavor to cooperate” with measures implemented by the government. This is a non-binding “reasonable efforts” (努力義務) obligation common in Japanese legislation.
- “Name and Shame” Enforcement: The government is empowered to gather information, analyze cases of rights infringement, and provide guidance or advice to businesses. Japanese news outlets have reported that in cases of significant infringement, it is likely that the government would have the authority to publicly disclose the names of non-compliant businesses. This “name and shame” mechanism leverages the high premium placed on corporate reputation in Japanese business culture. While it may appear toothless compared to the significant fines provided by laws like the EU’s AI Act, the brand damage from being publicly identified as an irresponsible actor may nevertheless serve as a powerful deterrent within the Japanese industry context.
Part 2: A Tale of Three AI Laws – Comparative Analysis of Japan’s AI Promotion Act, the EU’s AI Act, and South Korea’s AI Framework Act
To fully appreciate Japan’s approach, it is useful to compare it with the other two prominent global AI hard-law frameworks: the EU AI Act and South Korea’s AI Framework Act.
The EU AI Act is a comprehensive legal framework for AI systems. Officially published on July 12, 2024, it entered into force on August 2, 2024, but it becomes applicable in multiple stages, beginning in February 2025 and phasing in through 2030. Its primary aim is to regulate AI systems placed on the EU market, balancing innovation with ethical considerations and safety. The Act takes a risk-based approach whereby a few uses of AI systems are prohibited because they are considered to pose unacceptable risk to health, safety, and fundamental rights; some AI systems are considered “high-risk” and carry most of the compliance obligations for their deployers and providers; while others are either low risk, facing mainly transparency obligations, or simply outside the scope of the regulation. The AI Act also has a separate set of rules applying only to General Purpose AI models, with enhanced obligations for those that pose “systemic risk.” See here for a Primer on the EU AI Act.
South Korea’s “Framework Act on Artificial Intelligence Development and Establishment of a Foundation for Trustworthiness” (인공지능 발전과 신뢰 기반 조성 등에 관한 기본법), also known as the “AI Framework Act,” was passed on December 26, 2024, and is currently scheduled to take effect on January 22, 2026.
The stated purpose of the AI Framework Act is to protect citizens’ rights and dignity, improve quality of life, and strengthen national competitiveness. The Act aims to promote the AI industry and technology while simultaneously preventing associated risks, reflecting a balancing act between innovation and regulation. For a more detailed analysis of South Korea’s AI Framework Act, you may read FPF’s earlier blog post here.
Like the EU’s AI Act, South Korea’s AI Framework Act adopts a risk-based approach, introducing specific obligations for “high-impact” AI systems utilized in critical sectors such as healthcare, energy, and public services. However, a key difference between the two laws is that South Korea’s Act does not prohibit any AI practices or systems. It also includes specific provisions for generative AI. Notably, AI systems used solely for national defense or security are expressly excluded from its scope, and most AI systems not classified as “high-impact” are not subject to regulation under the AI Framework Act.
AI Business Operators, encompassing both developers and deployers, are subject to several specific obligations. These include establishing and operating a risk management plan, providing explanations for AI-generated results (within technical limits), implementing user protection measures, and ensuring human oversight for high-impact AI systems. For generative AI, providers are specifically required to notify users that they are interacting with an AI system.
The AI Framework Act establishes a comprehensive governance framework, including a National AI Committee chaired by the President of the country tasked with deliberating on policy, investment, infrastructure, and regulations. The AI Framework Act also establishes other governance institutions, such as the AI Policy Center and AI Safety Research Institute. The Ministry of Science and ICT (MSIT) holds the responsibility for establishing and implementing a Basic AI Plan every three years. The MSIT is also granted significant investigative and enforcement powers, with enforcement measures including corrective orders and fines. The AI Framework Act also includes extraterritorial provisions, extending its reach beyond South Korea.
Commonalities and divergences across jurisdictions
The regulatory philosophies across Japan, South Korea, and the EU present a spectrum of approaches.
- Japan: The primary goal of the AI Promotion Act is to drive economic growth. It acknowledges AI as a foundational technology for Japan’s socioeconomic development and seeks to enhance the country’s competitiveness and efficiency through AI. Its approach minimizes regulatory burdens, rejecting express penalties in favor of governmental guidance and voluntary cooperation.
- EU: The AI Act is precautionary, and its primary goal is to protect fundamental rights, health, and safety. Functioning primarily as a product safety regulation, it imposes strict and detailed ex-ante compliance obligations with severe penalties.
- South Korea: South Korea’s AI Framework Act charts a middle course between promoting innovation to enhance South Korea’s global competitiveness and addressing the potential societal risks posed by the misuse of AI. It, therefore, combines strong promotional measures with targeted, EU-style regulations for “high-impact” AI systems, representing a pragmatic attempt to balance innovation with risk management.
Differences are also evident in scope, risk classification, and enforcement severity. Japan’s AI Promotion Act and South Korea’s AI Framework Act are both foundational laws that allocate responsibilities for AI governance within the government and establish a legal basis for future regulation of AI. However, Japan’s AI Promotion Act does not impose any direct obligations on private actors and does not include a “risk” or “high-impact” classification of AI technologies. By contrast, South Korea’s AI Framework Act imposes a range of obligations on “high-impact” and generative AI, without going so far as to prohibit AI practices. The latter also has specific carve-outs for national defense, similar to how the EU AI Act excludes AI systems for military and national security purposes from its scope.
The EU AI Act has the broadest and most detailed scope, categorizing all AI systems into four risk levels, with strict requirements for high-risk and outright prohibitions for unacceptable risk systems, in addition to specific obligations for General Purpose AI (GPAI) models.
In terms of enforcement powers, Japan’s AI Promotion Act notably lacks any penalties for noncompliance or for the misuse of AI more broadly. South Korea’s AI Framework Act, by contrast, has enforcement powers, including fines and corrective orders, but its financial penalties are comparatively lower than those in the EU’s AI Act. For instance, the maximum fine under South Korea’s AI Framework Act is set at KRW 30 million (approximately USD 21,000), whereas, under the EU AI Act, fines can reach EUR 7.5 million to EUR 35 million (approximately USD 7.8 million to USD 36.5 million), or 1% to 7% of the company’s global annual turnover, whichever is higher, depending on the type of violation.
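For companies, the EU AI Act’s tiered fines work out to the greater of a fixed cap or a share of worldwide annual turnover. The short sketch below illustrates that arithmetic; the tier labels are our own shorthand, and the figures reflect the caps cited above (an illustration only, not legal advice):

```python
# Illustrative sketch of the EU AI Act's fine structure for companies:
# the maximum fine is the HIGHER of a fixed cap or a percentage of
# worldwide annual turnover, tiered by the type of violation.

EU_FINE_TIERS = {
    # tier label (our shorthand): (fixed cap in EUR, share of global turnover)
    "prohibited_practices": (35_000_000, 0.07),
    "other_obligations": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.01),
}


def max_eu_fine(tier: str, global_turnover_eur: float) -> float:
    """Return the maximum possible fine for a violation tier: the greater
    of the fixed cap or the turnover-based amount."""
    cap, pct = EU_FINE_TIERS[tier]
    return max(cap, pct * global_turnover_eur)


# A company with EUR 1 billion in global turnover facing a
# prohibited-practice violation: 7% of turnover (EUR 70M) exceeds
# the EUR 35M fixed cap, so the turnover-based figure applies.
print(max_eu_fine("prohibited_practices", 1_000_000_000))  # prints 70000000.0
```

For smaller companies the fixed cap usually dominates, which is why the headline EUR figures, rather than the percentages, are the relevant benchmark when comparing against South Korea’s KRW 30 million maximum.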
Despite these divergences, there are some commonalities. All three laws establish central governmental bodies (Japan’s AI Strategy Headquarters, South Korea’s National AI Committee, and the EU’s AI Office/NCAs) to coordinate AI policy and strategy. All three also emphasize international cooperation and participation in norm-setting. Notably, all three frameworks explicitly or implicitly reference the core tenets of transparency, fairness, accountability, safety, and human-centricity, which have been developed in international forums like the OECD and the G7 Hiroshima AI Process.
The divergence is not in the “what” – ensuring the responsible development and deployment of AI – but in the “how.” The EU chooses comprehensive, prescriptive regulation; Japan opts for softer regulation building on existing voluntary guidelines; and South Korea applies targeted regulation to specific high-risk areas. This indicates a global consensus on the desired ethical outcomes for AI, but a deep and consequential divergence on the most effective legal and administrative tools to achieve them.
Access here a detailed Comparative Table of the three AI laws in the EU, South Korea and Japan, comparing them on 11 criteria, from definitions and scope, to risk categorization, enforcement model and support for innovation.
The future of AI regulation: A new regional and global landscape
The distinctly “light-touch” approach to AI regulation in Japan suggests a minimal compliance burden for organizations in the immediate term. However, the AI Promotion Act is arguably the beginning, not the end, of the conversation, as the forthcoming Basic AI Plan has the potential to introduce a wide range of possible initiatives.
Regionally, Japan’s “innovation-first” strategy likely aims to draw investment by offering a less burdensome regulatory environment. The EU, conversely, is attempting to set a high standard for ethical and safe AI, aiming to foster sustainable and trustworthy innovation. South Korea’s middle-ground approach attempts to capture benefits from both strategies.
The availability of a full spectrum of regulatory models on a global scale aimed at the same technology could lead to regulatory arbitrage. It remains to be seen whether companies prioritize development in less regulated jurisdictions to minimize compliance costs, or, conversely, whether there will be a global demand for “EU-compliant” AI as a mark of trustworthiness. This dynamic implies that the future of AI development might be shaped not just by technological breakthroughs but by the attractiveness of regulatory environments as well.
Nevertheless, it is also worth noting that a jurisdiction’s regulatory model alone does not determine its ultimate success in attracting investments or deploying AI effectively. Many other factors, such as the availability of data, compute and talent, as well as the ease of doing business generally, will also be critical.
With two significant jurisdictions in the APAC region now having adopted innovation-oriented AI laws, the region appears to be setting a trend of innovation-first AI regulation, offering a contrasting model to the EU AI Act. At the same time, it is notable that both Japan and South Korea have comprehensive national data protection laws, which offer safeguards for people’s rights in all contexts where personal data is processed, including through AI systems.
Note: The summary of the AI Promotion Act above is based on an English machine translation, which may contain inaccuracies. Additionally, this information should not be considered legal advice. For specific legal guidance, kindly consult a qualified lawyer practicing in Japan.
The author acknowledges the valuable contributions of the APAC team’s interns, Darren Ang and James Jerin Akash, in assisting with the initial draft of this blog post.