Major global, multinational announcements and events related to AI governance and safety took place last week. We provide a brief overview here. In an effort to get off the beaten path, however, and move away from these recent nation-state-based AI governance efforts, we look at two recent reports that frame some genuinely interesting issues: “How Might AI Affect the Rise and Fall of Nations?” and “Governing AI at the Local Level for Global Benefit: A Response to the On-Going Calls for the Establishment of a Global AI Agency.”
The core question remains: Will legacy, human, geopolitical institutions be able to keep up with the “double exponential” growth of artificial intelligence? The analysis in these reports implicitly answers “no” – along with a discussion of what the impact of this failure may be and, in a positive vein, how we can shift to societal structures and policy debates that may be a more effective response to this “greatest dilemma” of the 21st century.
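To make the scale of that “double exponential” concrete, here is a purely illustrative sketch of our own – it is not drawn from either report, and the parameters are hypothetical. A capability that compounds double-exponentially leaves even an exponentially improving institution behind within a handful of steps.

```python
# Illustrative only: hypothetical curves, not data from either report.
# "Double exponential" growth, f(t) = 2**(2**t), versus ordinary exponential
# institutional adaptation, g(t) = 2**t.

def capability(t: int) -> int:
    """Hypothetical AI capability index growing double-exponentially."""
    return 2 ** (2 ** t)

def institution(t: int) -> int:
    """Hypothetical institutional capacity growing 'merely' exponentially."""
    return 2 ** t

for t in range(6):
    print(f"t={t}: capability={capability(t):>13,} institution={institution(t):>3,}")
```

By t = 5 the hypothetical capability index is in the billions while the institutional index sits at 32 – the intuition behind the reports’ shared skepticism that legacy institutions can keep pace.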
But first, the OODA Loop News Brief team captured the initial headlines last week. We will continue to track the implementation of these major global announcements in the weeks and months ahead.
Artificial intelligence continues to snare the technological limelight, and rightly so: as we move well into the final quarter of 2023, there is wide international interest in harnessing the power of AI. But with the excitement and anticipation come some appropriate notes of caution from governments around the world, concerned that all of AI’s promise and potential has a dark flipside: it can be used as a tool by bad actors just as easily as it can by the good guys. Thus, on October 30, 2023, US President Joe Biden issued the “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” while contemporaneously the G-7 Leaders issued a joint statement in support of the May 2023 “Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems.”
The US executive order also references the anticipated November UK Summit on AI Safety, which will bring together world leaders, technology companies, and AI experts to “facilitate a critical conversation on artificial intelligence.” Amid the cacophony of international voices trying to bring order to what many see as chaos, it is important for CISOs to understand how AI and machine learning are going to affect their role and their ability to thwart, detect, and remediate threats. Knowing what the new policy moves entail is critical to gauging where responsibility for dealing with the threats will lie and provides insight into what these governmental bodies believe is the way forward.
Twenty-eight countries including the US, UK and China have agreed to work together to ensure artificial intelligence is used in a “human-centric, trustworthy and responsible” way, in the first global commitment of its kind. The pledge forms part of a communique signed by major powers including Brazil, India and Saudi Arabia, at the inaugural AI Safety Summit. The two-day event, hosted and convened by British prime minister Rishi Sunak at Bletchley Park, started on Wednesday. Called the Bletchley Declaration, the document recognises the “potential for serious, even catastrophic, harm” to be caused by advanced AI models, but adds such risks are “best addressed through international co-operation”. Other signatories include the EU, France, Germany, Japan, Kenya and Nigeria.
The communique represents the first global statement on the need to regulate the development of AI, but at the summit there are expected to be disagreements about how far such controls should go. Country representatives attending the event include Hadassa Getzstain, Israeli chief of staff at the ministry of innovation, science and technology, and Wu Zhaohui, Chinese vice minister for technology. Gina Raimondo, US commerce secretary, gave an opening speech at the summit and announced a US safety institute to evaluate the risks of AI. This comes on the heels of a sweeping executive order by President Joe Biden, announced on Monday, and intended to curb the risks posed by the technology.
The unprecedented historical challenge posed by artificial intelligence has now been framed definitively:
Experts at the RAND Corporation explore several scenarios: “As AI continues to advance, geopolitics may never be the same. Humans organized in nation-states will have to work with another set of actors—AI-enabled machines—of equivalent or greater intelligence and, potentially, highly disruptive capabilities. In the age of geotechnopolitics, human identity and human perceptions of our roles in the world will be distinctly different; monumental scientific discoveries will emerge in ways that humans may not be able to comprehend. Consequently, the AI development path that ultimately unfolds will matter enormously for the shape and contours of the future world.
We outline several scenarios that illustrate how AI development could unfold in the near term, depending on who is in control. We held discussions with leading technologists, policymakers, and scholars spanning many sectors to generate our findings and recommendations. We presented these experts with the scenarios as a baseline to probe, reflect on, and critique. We sought to characterize the current trajectory of AI development and identify the most important factors for governing the evolution of this unprecedented technology.”
The scenarios are framed around the question “Who could control the development of AI?” For the full scenarios, the RAND report can be found at this link.
Stefaan Verhulst is the Co-Founder and Chief Research and Development Officer of The GovLab, and also one of the Data & Policy Editors-in-Chief. In a recent blog post at the Data & Policy Blog, Verhulst writes:
With the UK Summit in full swing, 2023 will likely be seen as a pivotal year for AI governance, with governments promoting a global governance model: AI Globalism. For it to be relevant, flexible, and effective, any global approach will need to be informed by and complemented by local experimentation and leadership, ensuring local responsiveness: AI Localism.
Even as consumers and businesses extend their use of AI (generative AI in particular), governments are also taking notice. Determined not to be caught on the back foot, as they were with social media, regulators and policymakers around the world are exploring frameworks and institutional structures that could help maximize the benefits while minimizing the potential harms of AI. This week, the UK is hosting a high-profile AI Safety Summit, attended by political and business leaders from around the world, including Kamala Harris and Elon Musk. Similarly, US President Biden recently signed an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which he hailed as a “landmark executive order” to ensure “safety, security, trust, openness, and American leadership.”
Amid the various policy and regulatory proposals swirling around, there has been a notable emphasis on what we might call AI globalism. The UK summit has explicitly endorsed a global approach to AI safety, with coordination between the US, EU, and China core to its vision of more responsible and safe AI. This global perspective follows similar recent calls for “an AI equivalent of the IPCC” or the International Atomic Energy Agency (IAEA). Notably, such calls are emerging both from the private sector and from civil society leaders.
In many ways, a global approach makes sense. Like most technology, AI is transnational in scope, and its governance will require cross-jurisdictional coordination and harmonization. At the same time, we believe that AI globalism should be accompanied by a recognition that some of the most innovative AI initiatives are taking place in cities and municipalities and being regulated at those levels too.
We call it AI Localism. In what follows, I outline a vision of a more decentralized approach to AI governance, one that would allow cities and local jurisdictions — including states — to develop and iterate governance frameworks tailored to their specific needs and challenges. This decentralized, local approach would need to take place alongside global efforts. The two would not be mutually exclusive but instead necessarily complementary.
In order to explore the possibilities (and challenges) of AI Localism, The GovLab developed the “AI Localism Canvas,” a pragmatic framework designed to delineate and assess the AI governance landscape specific to a city or region. The canvas serves as a living document and a heuristic tool for local decision-makers, aiding in the meticulous evaluation of risks and opportunities intrinsic to AI deployment. Policies and frameworks can be evaluated along categories such as Transparency, Procurement, Engagement, Accountability, Local Regulation, and Principles, providing a holistic view of the AI governance stack at a local level. This canvas is envisioned to evolve synchronously with the growing imprint of AI at the city level, continually refining the local governance responses to the multifaceted challenges posed by AI.
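To make the canvas concrete, below is a minimal sketch of how a local policy team might encode it as a working data structure. The six categories are taken from the canvas itself; the 0–5 maturity scoring, the field names, and the example entries are hypothetical additions of ours, not part of The GovLab’s specification.

```python
# A minimal, hypothetical encoding of the AI Localism Canvas.
# The six categories come from the canvas; the 0-5 maturity scoring and all
# field names are illustrative assumptions, not The GovLab's specification.
from dataclasses import dataclass, field

CANVAS_CATEGORIES = (
    "Principles",
    "Local Regulation",
    "Transparency",
    "Procurement",
    "Engagement",
    "Accountability",
)

@dataclass
class CanvasAssessment:
    """One city's snapshot of its AI governance stack, scored per category."""
    city: str
    scores: dict = field(default_factory=dict)  # category -> 0..5 maturity
    notes: dict = field(default_factory=dict)   # category -> rationale

    def record(self, category: str, score: int, note: str = "") -> None:
        if category not in CANVAS_CATEGORIES:
            raise ValueError(f"Unknown canvas category: {category!r}")
        if not 0 <= score <= 5:
            raise ValueError("Maturity score must be between 0 and 5")
        self.scores[category] = score
        if note:
            self.notes[category] = note

    def gaps(self, threshold: int = 2) -> list:
        """Categories still unassessed or scored below the threshold."""
        return [c for c in CANVAS_CATEGORIES if self.scores.get(c, 0) < threshold]

# Hypothetical usage, treating the assessment as a living document:
assessment = CanvasAssessment(city="Exampleville")
assessment.record("Procurement", 3, "AI clauses added to vendor RFP templates")
assessment.record("Transparency", 1, "No public algorithm register yet")
print(assessment.gaps())  # categories needing attention first
```

Because the canvas is meant to be a living document, re-running such an assessment as local AI deployments accumulate is one way a city could generate the granular feedback the authors describe.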
The advantages of such a canvas, and more generally of AI Localism, lie in its emphasis on immediacy and proximity. It allows researchers, policymakers, and other stakeholders to engender positive feedback loops through a deeper understanding of local conditions and a more granular calibration of AI policies. While we fully recognize (and even embrace) the need for a global perspective, we strongly believe that the global can grow and learn from the local: the general must be built upon the foundations of the particular.
For Verhulst’s full analysis, see “AI Globalism and AI Localism: Governing AI at the Local Level for Global Benefit: A Response to the On-Going Calls for the Establishment of a Global AI Agency.”
For more on the AI Localism Canvas, see “The AI Localism Canvas: A Framework to Assess The Emergence of Governance of AI within Cities.”
Technology Convergence and Market Disruption: Rapid advancements in technology are changing market dynamics and user expectations. See: Disruptive and Exponential Technologies.
The New Tech Trinity: Artificial Intelligence, BioTech, and Quantum Tech will make monumental shifts in the world. This new Tech Trinity will redefine our economy, both threaten and fortify our national security, and revolutionize our intelligence community. None of us are ready for this. This convergence requires a deepened commitment to foresight, preparation, and planning on a level that is not occurring anywhere. See: The New Tech Trinity.
AI Discipline Interdependence: There are concerns about uncontrolled AI growth, with many experts calling for robust AI governance. Both positive and negative impacts of AI need assessment. See: Using AI for Competitive Advantage in Business.
Benefits of Automation and New Technology: Automation, AI, robotics, and Robotic Process Automation are improving business efficiency. New sensors, especially quantum ones, are revolutionizing sectors like healthcare and national security. Advanced WiFi, cellular, and space-based communication technologies are enhancing distributed work capabilities. See: Advanced Automation and New Technologies.
Emerging NLP Approaches: While Big Data remains vital, there’s a growing need for efficient small-data analysis, especially with potential chip shortages. Cost reductions in training AI models offer promising prospects for business disruptions. Breakthroughs in unsupervised learning could be especially transformative. See: What Leaders Should Know About NLP.