
Is Japan’s Approach to AI the Future of Global, Risk-based, Multistakeholder AI Governance?

Of the many courses of action discussed by OpenAI CEO Sam Altman when he testified before Congress earlier this week, the creation of a global body to govern AI came up several times, with the International Atomic Energy Agency and CERN mentioned as models. For now, the 2023 G7 Summit in Hiroshima, Japan this weekend is the initial filter we are applying to the potential for global collaboration on the risks and opportunities of artificial intelligence. The Center for Strategic and International Studies (CSIS) has done some great work on this topic.

Background

“How to address AI’s risks while accelerating beneficial innovation and adoption is one of the most difficult challenges for policymakers…”

CSIS’ Hiroki Habuka, a Senior Associate at the Wadhwani Center for AI and Advanced Technologies, begins his analysis with the global efforts to date:

While AI brings dramatic solutions to societal problems, its unpredictable nature, unexplainability, and reflection or amplification of data biases raise various concerns about privacy, security, fairness, and even democracy.
In response, governments, international organizations, and research institutes around the world began publishing a series of principles for human-centric AI in the late 2010s.[1]

What began as broad principles are now transforming into more specific regulations:

  • In 2021, the European Commission published the draft Artificial Intelligence Act, which classifies AI according to four levels and prescribes corresponding obligations, including enhanced security, transparency, and accountability measures.
  • In the United States, the Algorithmic Accountability Act of 2022 was introduced in both houses of Congress in February 2022.
  • In June 2022, Canada proposed the Artificial Intelligence and Data Act (AIDA), which would make risk management and information disclosure mandatory for high-impact AI systems.

How to address AI’s risks while accelerating beneficial innovation and adoption is one of the most difficult challenges for policymakers, including Group of Seven (G7) leaders.

Japan’s Approach to AI Regulation and Its Impact on the 2023 G7 Presidency

“The emphasis is on a risk-based, agile, and multistakeholder process, rather than a one-size-fits-all obligation or prohibition.”

During the 2023 G7 summit in Japan, digital ministers are expected to discuss the human-centric approach to AI, which may cover regulatory or nonregulatory policy tools. As the host country, Japan’s approach to AI regulation may have considerable influence on consensus-building among global leaders.

Japan’s Approach to AI Regulation and Its Impact on the 2023 G7 Presidency analyzes the key trends in Japan’s AI regulation and discusses what arguments could be made at the G7 summit.

To summarize:

  • Japan has developed and revised AI-related regulations with the goal of maximizing AI’s positive impact on society, rather than suppressing it out of overestimated risks.
  • The emphasis is on a risk-based, agile, and multistakeholder process, rather than a one-size-fits-all obligation or prohibition.
  • Japan’s approach provides important insights into global trends in AI regulation.

Japan’s AI Regulations

Basic Principles

In 2019, the Japanese government published the Social Principles of Human-Centric AI (Social Principles) as principles for implementing AI in society. The Social Principles set forth three basic philosophies: human dignity, diversity and inclusion, and sustainability.

It is important to note that the goal of the Social Principles is not to restrict the use of AI in order to protect these principles but rather to realize them through AI. This corresponds to the structure of the Organization for Economic Cooperation and Development’s (OECD) AI Principles, whose first principle is to achieve “inclusive growth, sustainable development, and well-being” through AI.

To achieve these goals, the Social Principles set forth seven principles surrounding AI:

  1. Human-centric;
  2. Education/literacy;
  3. Privacy protection;
  4. Ensuring security;
  5. Fair competition;
  6. Fairness, accountability, and transparency; and
  7. Innovation.

It should be noted that the principles include not only the protective elements of privacy and security but also the principles that guide the active use of AI, such as education, fair competition, and innovation.

Japan’s AI regulatory policy is based on these Social Principles. Its AI regulations can be classified into two categories:

  1. Regulation on AI: Regulations to manage risks associated with AI.
  2. Regulation for AI: Regulatory reform to promote the implementation of AI.

Summary

On the regulation on AI side, Japan has taken the approach of respecting companies’ voluntary governance and providing nonbinding guidelines to support it, while imposing transparency obligations on some large digital platforms.

On the regulation for AI side, Japan is pursuing regulatory reforms that allow AI to be used for positive social impact and for achieving regulatory objectives. However, it remains to be seen what kinds of AI will actually meet the requirements of the regulation. Consideration should be given in light of global standards, which is why international cooperation on AI regulation is needed.

As outlined below, Japan takes a risk-based and soft-law approach to the regulation of AI while actively advancing legislative reform from the perspective of regulation for AI.

Regulation on AI

  • Japan has no regulations that generally constrain the use of AI. According to the AI Governance in Japan Ver. 1.1 report published by the Ministry of Economy, Trade, and Industry (METI) in July 2021—which comprehensively describes Japan’s AI regulatory policy (the AI Governance Report)—such “legally-binding horizontal requirements for AI systems are deemed unnecessary at the moment.” This is because regulations face difficulties in keeping up with the speed and complexity of AI innovation.
  • A prescriptive, static, and detailed regulation in this context could stifle innovation. Therefore, the METI report concludes that the government should respect companies’ voluntary efforts for AI governance while providing nonbinding guidance to support or guide such efforts. The guidance should be based on multistakeholder dialogue and be continuously updated in a timely manner. This approach is called “agile governance,” which is Japan’s basic approach to digital governance.
  • Looking at sector-specific regulations, none prohibit the use of AI per se but rather require businesses to take appropriate measures and disclose information about risks.  
  • From the viewpoint of fair competition, the Japan Fair Trade Commission analyzed the potential risks of cartel and unfair trade to be conducted by algorithms and concluded that most issues could be covered by the existing Antimonopoly Act.
  • There are some laws that do not directly legislate AI systems but still remain relevant for AI’s development and use. The Act on the Protection of Personal Information (APPI) describes the key mandatory obligations for organizations that collect, use, or transfer personal information. The latest amendment of the APPI, which came into effect in 2022, introduced the concept of pseudonymized personal data.  Since the obligations for handling pseudonymized information are less onerous than those for personal information, this new concept is expected to encourage businesses to use more data for AI development.
  • If an AI causes damage to a third party, the developer or operator of the AI may be liable in tort under civil law if it is negligent. However, it is difficult to determine who is negligent in each situation because AI output is unpredictable and the causes of the output are difficult to identify. 
  • The Product Liability Act reduces the victim’s burden of proof when claiming tort liability, but the act only covers damages arising from tangible objects. Therefore, it may apply to the hardware in which the AI is installed but not to the AI program itself.
  • METI’s Governance Guidelines for Implementation of AI Principles summarizes the action targets for implementing the Social Principles and how to achieve them with specific examples. It explains processes to establish and update an AI governance structure in collaboration with stakeholders according to an agile governance framework.

Guidelines for the Protection and Utilization of Data

Voluntary Initiatives by Businesses

As governments publish guidance on AI and data governance, some private companies are beginning to take a proactive approach to AI governance:

Tools by Research Institutes

Japanese research institutions also provide various tools to promote AI governance:

  • The National Institute of Advanced Industrial Science and Technology (AIST), administered by METI, provides the Machine Learning Quality Management Guideline, which establishes quality benchmark standards for machine learning-based products or services. It also provides procedural guidance for achieving quality through development process management and system evaluations.
  • The Institute for Future Initiatives at the University of Tokyo developed the Risk Chain Model to structure risk factors for AI and is conducting case studies in cooperation with private companies.

What Next?  Possible Steps for Collaboration

“…it would be beneficial to promote the case sharing and standardization…with a view to achieving interoperability in the future.” 

There is a strong case for countries to consider taking concrete steps toward international cooperation. Such cooperation has already begun in various forums.

These initiatives are still in the roadmap stage and require various processes before they are actually implemented. The following are possible future steps in international collaboration.

A relatively easy step would be the sharing of AI incidents and best practices among different countries. Like regulations in all other areas, AI regulations need to be implemented based on concrete necessity and proportionality, rather than being deduced from abstract concepts. Therefore, sharing actual examples of what risks have been caused by AI in what areas—and what technical, organizational, and social methods have been effective in overcoming them—will be an important decision-making tool for policymakers.

For example, the Global Partnership on Artificial Intelligence (GPAI), a multistakeholder initiative housed at the OECD that aims to bridge the gap between theory and practice on AI, is analyzing best practices for the use of climate change data and the use of privacy enhancement technologies. Japan is serving as chair of GPAI in 2022–2023, contributing to this international development of best practices.

Where such best practices can be generalized, international standards could be the next step. Standards would provide AI service providers with insights on good AI governance practices, clarify regulatory content in Category 1 countries, and serve as a basis for responsibility and social evaluation in Category 2 (and also Category 1) countries.

For example, the U.S.-EU Trade and Technology Council (TTC) agreed to advance standardization for:

  1. shared terminologies and taxonomies;
  2. tools for trustworthy AI and risk management; and
  3. the monitoring and measuring of AI risks.

A more ambitious attempt would be to achieve cross-border interoperability on AI governance. In other words, a mechanism could be introduced whereby a certification (e.g., security certification, type certification) or process (e.g., AI impact assessment, privacy impact assessment) required under regulation or contract in one country can also be used in another country. Although it is premature to discuss the specifics of interoperability at this time since the AI regulations of each country have not yet been adopted, it would be beneficial to promote the case sharing and standardization described above with a view to achieving interoperability in the future.

Toward Agile AI Governance

“Because of its clear and consistent vision for AI governance and its successful AI regulatory reforms…Japan has a promising position to move the G7 collaboration on good AI governance forward.”

International cooperation in the form of sharing best practices, establishing shared standards, and ensuring future interoperability may appear to be the typical pattern that has been repeated in various fields in the past.

However, some special attention should be paid in the field of AI governance:

  • First, sufficient AI governance cannot be achieved solely through intergovernmental cooperation. Given the technical complexity of AI systems, as well as the magnitude of AI’s impact on human autonomy and economy (both in positive and negative ways), it is important to have multistakeholder collaboration. Stakeholders include not only experts in technology, law, economics, and management but also individuals and communities as the ultimate beneficiaries of AI governance.
  • Second, given the speed at which AI technologies evolve, AI governance methods need to be agile and continuously evaluated and updated. In updating, it is necessary not only to revise existing laws and guidance but also to adjust the structure of the regulatory system itself to meet actual needs, such as deciding the extent to which binding laws, as opposed to guidance, are needed to tackle actual problems.
  • The Japanese government has named this multistakeholder and flexible governance process “agile governance” and has positioned it as a fundamental policy for a digitalized society. METI summarizes the overarching concept across three reports published in 2020, 2021, and 2022. Japan’s Governance Guidelines for Implementation of AI Principles and the Guidebook on Corporate Governance for Privacy in Digital Transformation …are also based on this concept. In addition, the Digital Rincho, a comprehensive regulatory reform…has also adopted the agile governance principle as one of its key foundations.

Because of its clear and consistent vision for AI governance and its successful AI regulatory reforms, as well as various initiatives by businesses to create good governance practices and contribute to standard setting, Japan has a promising position to move the G7 collaboration on good AI governance forward. (1)

https://oodaloop.com/archive/2019/02/27/securing-ai-four-areas-to-focus-on-right-now/

https://oodaloop.com/archive/2023/05/08/the-ooda-network-on-the-real-danger-of-ai-innovation-at-exponential-speed-and-scale-and-not-adequately-addressing-ai-governance/

https://oodaloop.com/archive/2023/05/15/openai-ceo-sam-altmans-senate-testifies-on-oversight-of-a-i-rules-for-artificial-intelligence-livestream-tuesday-may-16th-at-10-am-est/

https://oodaloop.com/archive/2023/05/18/openai-hugging-face-and-defcon-31-august-2023-on-red-teaming-large-language-models-and-neural-language-models/

https://oodaloop.com/ooda-original/2023/04/26/the-cybersecurity-implications-of-chatgpt-and-enabling-secure-enterprise-use-of-large-language-models/


About the Author

Daniel Pereira

Daniel Pereira is research director at OODA. He is a foresight strategist, creative technologist, and an information communication technology (ICT) and digital media researcher with 20+ years of experience directing public/private partnerships and strategic innovation initiatives.