Will the “Double Exponential” Growth of Artificial Intelligence Render Global AI Governance and Safety Efforts Futile?

Major global, multinational announcements and events related to AI governance and safety took place last week, and we provide a brief overview here. To get off the beaten path, however, and move beyond these recent nation-state-based AI governance efforts, we turn to two recent reports that frame some genuinely interesting issues: “How Might AI Affect the Rise and Fall of Nations?” and “Governing AI at the Local Level for Global Benefit: A Response to the On-Going Calls for the Establishment of a Global AI Agency.”

The core question remains: Will legacy, human, geopolitical institutions be able to keep up with the “double exponential” growth of artificial intelligence? The implicit answer in both reports is “no” – accompanied by a discussion of the likely impact of this failure and, in a more positive vein, of how we might shift to societal structures and policy debates that respond more effectively to this “greatest dilemma” of the 21st century.

But first, the OODA Loop News Brief team captured the initial headlines last week. We will continue to track the implementation of these major global announcements in the weeks and months ahead.  

U.S. Executive Order on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” and the “Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems”

What the White House executive order on AI means for cybersecurity leaders

Artificial intelligence continues to snare the technological limelight, and rightly so: as we move well into the final quarter of 2023, there is wide international interest in harnessing the power of AI. But with the excitement and anticipation come some appropriate notes of caution from governments around the world, concerned that all of AI’s promise and potential has a dark flipside: It can be used as a tool by bad actors just as easily as it can by the good guys. Thus, on October 30, 2023, US President Joe Biden issued the “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” while contemporaneously the G-7 Leaders issued a joint statement in support of the May 2023 “Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems.”

The US executive order also references the anticipated November UK Summit on AI Safety, which will bring together world leaders, technology companies and AI experts to “facilitate a critical conversation on artificial intelligence.” Amid the cacophony of international voices trying to bring order to what many see as chaos, it is important for CISOs to understand how AI and machine learning are going to affect their role and their abilities to thwart, detect, and remediate threats. Knowing what the new policy moves entail is critical to gauging where responsibility for dealing with the threats will lie and provides insight into what these governmental bodies believe is the way forward.

The Inaugural AI Safety Summit and “The Bletchley Declaration”

US, China and 26 other nations agree to co-operate over AI development

Twenty-eight countries including the US, UK and China have agreed to work together to ensure artificial intelligence is used in a “human-centric, trustworthy and responsible” way, in the first global commitment of its kind. The pledge forms part of a communique signed by major powers including Brazil, India and Saudi Arabia, at the inaugural AI Safety Summit. The two-day event, hosted and convened by British prime minister Rishi Sunak at Bletchley Park, started on Wednesday. Called the Bletchley Declaration, the document recognises the “potential for serious, even catastrophic, harm” to be caused by advanced AI models, but adds such risks are “best addressed through international co-operation”. Other signatories include the EU, France, Germany, Japan, Kenya and Nigeria.

The communique represents the first global statement on the need to regulate the development of AI, but at the summit there are expected to be disagreements about how far such controls should go. Country representatives attending the event include Hadassa Getzstain, Israeli chief of staff at the ministry of innovation, science and technology, and Wu Zhaohui, Chinese vice minister for technology. Gina Raimondo, US commerce secretary, gave an opening speech at the summit and announced a US safety institute to evaluate the risks of AI. This comes on the heels of a sweeping executive order by President Joe Biden, announced on Monday, and intended to curb the risks posed by the technology.

Background

The A.I. Dilemma

The unprecedented historical challenge posed by artificial intelligence has now been definitively:

  1. Quantified by Tristan Harris, Co-founder of the Center for Humane Technology, who has shifted from his seminal work on the damage wrought by social media to the “A.I. Dilemma.” Major AI labs reached out to Harris and his team for help raising public awareness and making the issues surrounding the deployment of AI systems more accessible. Both Harris and Elon Musk now couch their primary concern in the quantifiable fact that AI is on a “double exponential” growth trajectory (illustrated in the sketch following the excerpt below).
  2. Expertly, qualitatively framed by Mustafa Suleyman in his book “The Coming Wave: Technology, Power, and the Twenty-first Century’s Greatest Dilemma.” Suleyman captures future threats eloquently and accessibly – and also addresses AI’s ability to amplify what we frame here at OODA Loop as the jagged transitions and binary fractures of a transforming globalization. Suleyman frames this “greatest dilemma” as a function of profound systemic change: “Technology is ultimately political because technology is a form of power.” The Financial Times also featured Suleyman in some recent coverage:

    …the rapid development of the technology threatens to outrun efforts of regulators to control it. The “exponential trajectory” of AI meant that two years from now, the large language models at the centre of current AI development would be 100 times more powerful than OpenAI’s GPT-4, Suleyman said. “That justifies real action” such as the limit on chip sales, he added. Most concerns about AI have been split between the immediate risks posed by today’s AI-powered chatbots and the long-term risk that AI systems will escape human control once they exceed the understanding of their makers, something known as superintelligence. Instead, tech executives such as Suleyman point to an intermediate period that is fast approaching, when the large language models that stand behind today’s chatbots are used in much more significant applications. “Too much of the conversation is fixated on superintelligence, which is a huge distraction,” he said. “We should be focused on the practical near-term capabilities which are going to arise in the next 10 years [and] which I believe are reasonably predictable.” 
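
To make the “double exponential” claim concrete, the following is a minimal illustrative sketch in Python – our own toy model, not drawn from Harris or Suleyman – contrasting ordinary exponential growth, where the doubling time stays fixed, with double-exponential growth, where the exponent itself grows exponentially and the doubling time keeps shrinking. All parameters are invented for illustration.

    # Illustrative toy model only: invented parameters, not actual AI capability data.
    def exponential(t, a=2.0, b=2.0):
        """Capability after t periods at a constant growth rate: a * b**t."""
        return a * b**t

    def double_exponential(t, a=2.0, b=2.0):
        """Capability after t periods when the growth rate itself compounds: a ** (b**t)."""
        return a ** (b**t)

    if __name__ == "__main__":
        for t in range(6):
            print(f"t={t}: exponential={exponential(t):>6,.0f}   "
                  f"double exponential={double_exponential(t):>13,.0f}")

Even with these toy parameters, the two curves diverge dramatically within a few periods – the crux of the argument that governance timelines calibrated to ordinary exponential growth may prove too slow.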

AI and Geopolitics:  How Might AI Affect the Rise and Fall of Nations?

Experts at the RAND Corporation explore several scenarios: “As AI continues to advance, geopolitics may never be the same. Humans organized in nation-states will have to work with another set of actors—AI-enabled machines—of equivalent or greater intelligence and, potentially, highly disruptive capabilities. In the age of geotechnopolitics, human identity and human perceptions of our roles in the world will be distinctly different; monumental scientific discoveries will emerge in ways that humans may not be able to comprehend. Consequently, the AI development path that ultimately unfolds will matter enormously for the shape and contours of the future world.

We outline several scenarios that illustrate how AI development could unfold in the near term, depending on who is in control. We held discussions with leading technologists, policymakers, and scholars spanning many sectors to generate our findings and recommendations. We presented these experts with the scenarios as a baseline to probe, reflect on, and critique. We sought to characterize the current trajectory of AI development and identify the most important factors for governing the evolution of this unprecedented technology.” 

Scenarios – framed around the question “Who could control the development of AI?” – include:

  • U.S. Companies Lead the Way:  In this world, U.S. government personnel continue to lag behind engineers in the U.S. technology sector, both in their understanding of AI and in their ability to harness its power. Private corporations direct the investment of almost all research and development funding to improve AI, and the vast majority of U.S. technical talent continues to flock to Silicon Valley…In this world, the future relationship between the U.S. government and the technology sector looks much like the present: Companies engage in aggressive data-gathering of consumers, and social media continues to be a hotbed for disinformation and dissension.
  • U.S. Government Takeover:  AI advances are proceeding at a rapid rate, and concerns about catastrophic consequences lead the U.S. government—potentially in coordination with like-minded allies—to seize control of AI development. The United States chooses to abandon its traditional light-handed approach to regulating information technology and software development and instead embarks on large-scale regulation and oversight...In the defense sector, this could lead to an arms race dynamic as other governments initiate AI development programs of their own for fear that they will be left behind in an AI-driven era. Across the instruments of power, such nationalization could also shift the balance of haves versus have-nots as other countries that fail to keep up with the transition see their economies suffer because of a lack of ability to develop AI and incorporate it into their workforces.
  • Chinese Surprise:  Akin to a Sputnik moment, three Chinese organizations—Huawei, Baidu, and the Beijing Academy of Artificial Intelligence (BAAI)—announce a major AI breakthrough, taking the world by surprise. In this world, AI progress in China is initially overlooked and consistently downplayed by policymakers from advanced, democratic economies. Chinese companies, research institutes, and key government labs leapfrog ahead of foreign competitors, in part because of their improved ability to absorb vast amounts of government funding. State-of-the-art AI models also have been steadily advancing, leveraging a competitive data advantage from the country’s massive population…This combination of world-leading expertise and enormous computing power allows China to scale AI research at an unprecedented rate, leading to breakthroughs in transformative AI research that catch the world off guard. This leads to intense concerns from U.S. political and military leaders that China’s newfound AI capabilities will provide it with an asymmetric military advantage over the United States.
  • Great Power Public-Private Consortium:  Across the world, robust partnerships among government, global industry, civil society organizations, academia, and research institutions support the rapid development and deployment of AI. These partnerships form a consortium that carries out multi-stakeholder project collaborations that access large-scale computational data and training, computing, and storage resources…New and existing international government bodies, including the Abu Dhabi AI Council, rely on diverse participation and contributions to set standards for responsible AI use. The result is a healthy AI sector that supports economic growth that occurs concurrently with the development and evaluation of equitable, safe, and secure AI systems. 

What have we learned?

  • Countries and companies will clash in new ways, and AI could become an actor, not just a factor
  • We are entering an era of both enlightenment and chaos
  • The United States and China will lead in different ways
  • Technological innovation will continue to outpace traditional regulation

What should government policymakers do to protect humanity?

  • Governments should focus on strengthening resilience to AI threats
  • Governments should look beyond traditional regulatory techniques to influence AI developments
  • Governments should continue support for innovation
  • Governments should partner with the private sector to improve risk assessments

For the full scenarios, see the complete RAND report: How Might AI Affect the Rise and Fall of Nations?

AI Globalism and AI Localism: Governing AI at the Local Level for Global Benefit: A Response to the On-Going Calls for the Establishment of a Global AI Agency

Stefaan Verhulst is the Co-Founder and Chief Research and Development Officer of The GovLab and one of the Editors-in-Chief of Data & Policy. In a recent post on the Data & Policy Blog, Verhulst writes:

With the UK Summit in full swing, 2023 will likely be seen as a pivotal year for AI governance, with governments promoting a global governance model: AI Globalism. For it to be relevant, flexible, and effective, any global approach will need to be informed by and complemented by local experimentation and leadership, ensuring local responsiveness: AI Localism.

Even as consumers and businesses extend their use of AI (generative AI in particular), governments are also taking notice. Determined not to be caught on the back foot, as they were with social media, regulators and policymakers around the world are exploring frameworks and institutional structures that could help maximize the benefits while minimizing the potential harms of AI. This week, the UK is hosting a high-profile AI Safety Summit, attended by political and business leaders from around the world, including Kamala Harris and Elon Musk. Similarly, US President Biden recently signed an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which he hailed as a “landmark executive order” to ensure “safety, security, trust, openness, and American leadership.”

Amid the various policy and regulatory proposals swirling around, there has been a notable emphasis on what we might call AI globalism. The UK summit has explicitly endorsed a global approach to AI safety, with coordination between the US, EU, and China core to its vision of more responsible and safe AI. This global perspective follows similar recent calls for “an AI equivalent of the IPCC” or the International Atomic Energy Agency (IAEA). Notably, such calls are emerging both from the private sector and from civil society leaders.

In many ways, a global approach makes sense. Like most technology, AI is transnational in scope, and its governance will require cross-jurisdictional coordination and harmonization. At the same time, we believe that AI globalism should be accompanied by a recognition that some of the most innovative AI initiatives are taking place in cities and municipalities and being regulated at those levels too.

We call it AI localism. In what follows, we outline a vision of a more decentralized approach to AI governance, one that would allow cities and local jurisdictions — including states — to develop and iterate governance frameworks tailored to their specific needs and challenges. This decentralized, local approach would need to take place alongside global efforts. The two would not be mutually exclusive but instead necessarily complementary.

An AI Localism Canvas

In order to explore the possibilities (and challenges) of AI Localism, The GovLab developed the “AI Localism Canvas,” a pragmatic framework designed to delineate and assess the AI governance landscape specific to a city or region. The canvas serves as a living document and a heuristic tool for local decision-makers, aiding in the meticulous evaluation of risks and opportunities intrinsic to AI deployment. Policies and frameworks can be evaluated along categories such as Transparency, Procurement, Engagement, Accountability, Local Regulation, and Principles, providing a holistic view of the AI governance stack at a local level. This canvas is envisioned to evolve synchronously with the growing imprint of AI at the city level, continually refining the local governance responses to the multifaceted challenges posed by AI.

The advantage of such a canvas, and more generally of AI Localism, lies in its emphasis on immediacy and proximity. It allows researchers, policymakers, and other stakeholders to engender positive feedback loops through a deeper understanding of local conditions and a more granular calibration of AI policies. While we fully recognize (and even embrace) the need for a global perspective, we strongly believe that the global can grow and learn from the local: the general must be built upon the foundations of the particular.
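
As a purely hypothetical illustration of how such a canvas might be operationalized, the sketch below – our own, not a GovLab artifact – represents the six canvas categories described above as a simple scored data structure for one city; the 0–5 scale and the sample scores are invented.

    # Hypothetical sketch: the six categories come from The GovLab's AI Localism
    # Canvas; the 0-5 scoring scale and the sample city scores are invented.
    CANVAS_CATEGORIES = [
        "Transparency", "Procurement", "Engagement",
        "Accountability", "Local Regulation", "Principles",
    ]

    def assess(scores):
        """Print each category's score and flag weak areas (below 3 of 5)."""
        for category in CANVAS_CATEGORIES:
            score = scores.get(category, 0)
            flag = "  <- needs attention" if score < 3 else ""
            print(f"{category:<17} {score}/5{flag}")

    if __name__ == "__main__":
        # Invented example: one city's self-assessment of its AI governance stack.
        assess({
            "Transparency": 4, "Procurement": 2, "Engagement": 3,
            "Accountability": 2, "Local Regulation": 3, "Principles": 5,
        })

A real canvas is of course richer than a score sheet, but even this toy version suggests how local decision-makers could track where their AI governance stack needs attention.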

For Verhulst’s full analysis, see AI Globalism and AI Localism: Governing AI at the Local Level for Global Benefit: A Response to the On-Going Calls for the Establishment of a Global AI Agency

For more on the AI Localism Canvas, see The AI Localism Canvas: A Framework to Assess The Emergence of Governance of AI within Cities

Additional OODA Loop Resources

Technology Convergence and Market Disruption: Rapid advancements in technology are changing market dynamics and user expectations. See: Disruptive and Exponential Technologies.

The New Tech Trinity: Artificial Intelligence, BioTech, Quantum Tech: This new Tech Trinity will make monumental shifts in the world, redefining our economy, both threatening and fortifying our national security, and revolutionizing our intelligence community. None of us are ready for this. This convergence requires a deepened commitment to foresight, preparation, and planning on a level that is not occurring anywhere. See: The New Tech Trinity.

AI Discipline Interdependence: There are concerns about uncontrolled AI growth, with many experts calling for robust AI governance. Both positive and negative impacts of AI need assessment. See: Using AI for Competitive Advantage in Business.

Benefits of Automation and New Technology: Automation, AI, robotics, and Robotic Process Automation are improving business efficiency. New sensors, especially quantum ones, are revolutionizing sectors like healthcare and national security. Advanced WiFi, cellular, and space-based communication technologies are enhancing distributed work capabilities. See: Advanced Automation and New Technologies

Emerging NLP Approaches: While Big Data remains vital, there’s a growing need for efficient small data analysis, especially with potential chip shortages. Cost reductions in training AI models offer promising prospects for business disruptions. Breakthroughs in unsupervised learning could be especially transformative. See: What Leaders Should Know About NLP

About the Author

Daniel Pereira

Daniel Pereira is research director at OODA. He is a foresight strategist, creative technologist, and an information communication technology (ICT) and digital media researcher with 20+ years of experience directing public/private partnerships and strategic innovation initiatives.