Of the many courses of action OpenAI CEO Sam Altman discussed when he testified before Congress earlier this week, the creation of a global body to manage the governance of AI came up several times, with the International Atomic Energy Agency and CERN mentioned as models. For now, the 2023 G7 Summit in Hiroshima, Japan, taking place this weekend, is the initial filter we are applying to the potential for global collaboration on the risks and opportunities of artificial intelligence. The Center for Strategic and International Studies (CSIS) has done some great work on this topic.
“How to address AI’s risks while accelerating beneficial innovation and adoption is one of the most difficult challenges for policymakers…”
CSIS’ Hiroki Habuka, a Senior Associate at the Wadhwani Center for AI and Advanced Technologies, begins his analysis with a survey of the global efforts to date:
While AI brings dramatic solutions to societal problems, its unpredictable nature, unexplainability, and reflection or amplification of data biases raise various concerns about privacy, security, fairness, and even democracy.
In response, governments, international organizations, and research institutes around the world began publishing a series of principles for human-centric AI in the late 2010s.[1]
What began as broad principles is now being transformed into more specific regulations:
How to address AI’s risks while accelerating beneficial innovation and adoption is one of the most difficult challenges for policymakers, including Group of Seven (G7) leaders.
“The emphasis is on a risk-based, agile, and multistakeholder process, rather than a one-size-fits-all obligation or prohibition.”
During the 2023 G7 summit in Japan, digital ministers are expected to discuss the human-centric approach to AI, which may cover regulatory or nonregulatory policy tools. As the host country, Japan’s approach to AI regulation may have considerable influence on consensus-building among global leaders.
Habuka’s report, Japan’s Approach to AI Regulation and Its Impact on the 2023 G7 Presidency, analyzes the key trends in Japan’s AI regulation and discusses what arguments could be made at the G7 summit.
To summarize:
Basic Principles
In 2019, the Japanese government published the Social Principles of Human-Centric AI (Social Principles) as principles for implementing AI in society. The Social Principles set forth three basic philosophies: human dignity, diversity and inclusion, and sustainability.
It is important to note that the goal of the Social Principles is not to restrict the use of AI in order to protect these principles but rather to realize them through AI. This corresponds to the structure of the Organization for Economic Cooperation and Development’s (OECD) AI Principles, whose first principle is to achieve “inclusive growth, sustainable development, and well-being” through AI.
To achieve these goals, the Social Principles set forth seven principles surrounding AI: (1) human-centricity; (2) education and literacy; (3) privacy protection; (4) ensuring security; (5) fair competition; (6) fairness, accountability, and transparency; and (7) innovation.
It should be noted that these include not only protective elements such as privacy and security but also principles that guide the active use of AI, such as education, fair competition, and innovation.
Japan’s AI regulatory policy is based on these Social Principles. Its AI regulations can be classified into two categories: regulation on AI, which addresses the risks AI poses, and regulation for AI, which reforms existing rules so that AI can be used to achieve social and regulatory goals.
On the regulation on AI side, Japan has taken the approach of respecting companies’ voluntary governance and providing nonbinding guidelines to support it, while imposing transparency obligations on some large digital platforms.
On the regulation for AI side, Japan is pursuing regulatory reforms that allow AI to be used for positive social impact and for achieving regulatory objectives. However, it remains to be seen what kind of AI will actually meet the requirements of these regulations. Consideration should be given in light of global standards, and this is one reason international cooperation is needed on AI regulation.
As outlined below, Japan takes a risk-based and soft-law approach to the regulation of AI while actively advancing legislative reform from the perspective of regulation for AI.
As governments publish guidance on AI and data governance, some private companies are beginning to take a proactive approach to AI governance. Japanese research institutions also provide various tools to promote AI governance.
“…it would be beneficial to promote the case sharing and standardization…with a view to achieving interoperability in the future.”
There is a strong case for countries to take concrete steps toward international cooperation, and such cooperation has already begun in various forums.
These initiatives are still in the roadmap stage and require various processes before they are actually implemented. The following are possible future steps in international collaboration.
A relatively easy step would be the sharing of AI incidents and best practices among different countries. Like regulations in all other areas, AI regulations need to be implemented based on concrete necessity and proportionality, rather than being deduced from abstract concepts. Therefore, sharing actual examples of what risks have been caused by AI in what areas—and what technical, organizational, and social methods have been effective in overcoming them—will be an important decision-making tool for policymakers.
For example, the Global Partnership on Artificial Intelligence (GPAI), a multistakeholder initiative housed at the OECD that aims to bridge the gap between theory and practice on AI, is analyzing best practices for the use of climate change data and of privacy-enhancing technologies. Japan is serving as chair of GPAI in 2022–2023, contributing to this international development of best practices.
Where such best practices can be generalized, international standards could be the next step. Standards would provide AI service providers with insights on good AI governance practices, clarify regulatory content in Category 1 countries (those adopting binding AI regulation), and serve as a basis for responsibility and social evaluation in Category 2 countries (those, like Japan, relying on soft law), as well as in Category 1 countries.
For example, the EU-U.S. Trade and Technology Council (TTC) agreed to advance standardization in several areas.
A more ambitious attempt would be to achieve cross-border interoperability on AI governance. In other words, a mechanism could be introduced whereby a certification (e.g., security certification, type certification) or process (e.g., AI impact assessment, privacy impact assessment) required under regulation or contract in one country can also be used in another country. Although it is premature to discuss the specifics of interoperability at this time since the AI regulations of each country have not yet been adopted, it would be beneficial to promote the case sharing and standardization described above with a view to achieving interoperability in the future.
Toward Agile AI Governance
“Because of its clear and consistent vision for AI governance and its successful AI regulatory reforms…Japan has a promising position to move the G7 collaboration on good AI governance forward.”
International cooperation in the form of sharing best practices, establishing shared standards, and ensuring future interoperability may appear to be the typical pattern that has been repeated in various fields in the past.
However, the field of AI governance requires some special attention.
Because of its clear and consistent vision for AI governance and its successful AI regulatory reforms, as well as various initiatives by businesses to create good governance practices and contribute to standard setting, Japan has a promising position to move the G7 collaboration on good AI governance forward. (1)