
AI/ML-enabled Systems for Strategy and Judgment and the Future of Human-Computer-Data Interaction Design

Background

For obvious reasons, it was this section of the recent academic paper “Prediction and Judgment: Why Artificial Intelligence Increases the Importance of Humans in War” that was of keen interest to us here at OODA Loop:

“From an economic perspective, modern AI is best understood as a better, faster, and cheaper form of statistical prediction. The overall effect on decision-making, however, is indeterminate. This implies that organizations, military and otherwise, will be able to perform more predictions in the future than they do today, but not necessarily that their performance will improve in all cases.

The Decision-making Process: Economic decision theory emerged alongside the intellectual tradition of cybernetics. (27) As Herbert Simon observed over sixty years ago, “A real-life decision involves some goals or values, some facts about the environment, and some inferences drawn from the values and facts.” (28) We describe these elements as judgment, data, and prediction. Together they produce actions that shape economic or political outcomes. Feedback from actions produces more data, which can be used for more predictions and decisions or to reinterpret judgment. The so-called OODA loop in military doctrine captures the same ideas. (29) Decision cycles govern all kinds of decision tasks, from the trivial (picking up a pencil) to the profound (mobilizing for war). The abstract decision model [OODA Loop] is agnostic about implementation, which means that the logic of decision might be implemented with organic, organizational, or technological components.” (1)

This website (oodaloop.com), for example, is an implementation that uses organic, organizational, and technological components to generate foresight strategy, risk awareness, decision intelligence, signal tracking, and sensemaking, and to build a community that informs the “logic of decisions” around complex global, security, technological, and societal uncertainties. The website and the OODA Loop community can then be used to inform a personal and/or organizational OODA Loop (and, as a result, tactical and strategic decision-making processes).
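To make the abstract decision cycle concrete, the loop can be sketched in a few lines of code. This is our illustration, not the paper’s formalism: the running-mean “prediction” and the threshold “judgment” are hypothetical stand-ins for far richer organic and organizational components.

```python
from dataclasses import dataclass, field
import random

@dataclass
class DecisionCycle:
    """Minimal sketch of the judgment/data/prediction cycle (our
    illustration): data feeds prediction, judgment applies values to
    the prediction, and action generates feedback as new data."""
    data: list = field(default_factory=list)

    def observe(self, signal: float) -> None:
        # Observe: facts about the environment become data.
        self.data.append(signal)

    def predict(self) -> float:
        # Orient: a cheap statistical prediction (here, a running mean).
        return sum(self.data) / len(self.data)

    def judge(self, prediction: float) -> str:
        # Decide: judgment applies goals and values to the prediction.
        return "act" if prediction > 0.5 else "hold"

    def step(self, signal: float) -> str:
        # One observe-orient-decide-act pass; the action's feedback
        # would arrive as the next observation.
        self.observe(signal)
        return self.judge(self.predict())

loop = DecisionCycle()
for _ in range(5):
    print(loop.step(random.random()))
```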

The aforementioned “Prediction and Judgment” paper was written by Avi Goldfarb (the Rotman Chair in Artificial Intelligence and Healthcare and Professor of Marketing at the University of Toronto and a research associate at the National Bureau of Economic Research) and Jon R. Lindsay (Associate Professor at the School of Cybersecurity and Privacy and the Sam Nunn School of International Affairs at the Georgia Institute of Technology).

While the paper offers business leaders many insights into how best to think about different types of artificial intelligence (AI) and machine learning (ML) implementations, and into the risk implicit in AI-enabled systems for strategy, decision-making, and judgment, it also sparked a debate here at OODA Loop over what some saw as the authors’ assumptions about how the military thinks internally about human capital and how military command and control frameworks will inform the future of AI-enabled strategy.

OODA Network Member Junaid Islam, in a January 2019 article, “AI Will Test American Values In The Battlefield,” captured some of the internal OODA Loop positions on these issues:

“AlphaZero is the first general-purpose, intuition-based AI system (referred to as “Type B”). Unlike its Type A ancestors, AlphaZero has no notion of human values. Instead, it uses reinforcement learning to get progressively better at supercomputer speed. The programmers behind AlphaZero hoped to create a better version of Stockfish. What they got was Dr. Hannibal Lecter. In AlphaZero’s 2017 and 2018 matches against Stockfish, it readily gave up its own pawns to trick Stockfish into an unwinnable board position. The sacrificing of pawns was used in every game that it played. YouTube is filled with chess analysts describing AlphaZero as brutal. If AlphaZero were human, you’d probably worry about being alone with it.

At present, there is a huge rush to deploy general-purpose, intuition-based AI systems in battlefield applications ranging from data analysis to weapons system management. Given our first glimpse of what an intuition-based AI system is capable of, here are a few items the Pentagon and Congress should consider:

Mandate System Partitioning: It’s very tempting to connect data analysis and weapons system management using an intuition-based AI system to create the perfect OODA Loop. Unfortunately, the speed by which AlphaZero-type systems can launch attacks is frightening. Thus, analytics and weapons control systems should have a triple layer of network, authentication, and cryptographic separation between them.

Require Human Override: The gut reaction to an AI system going “postal” is to mandate an “Off Button”. However, on the battlefield that is not an option – as the enemy would only need to confuse an AI system with bad data to shut it down. A more practical approach is to require a human override mode in case Hannibal is having a bad day.

Ban Machine-based Killing: The ability to instantly acquire a target and kill it is the dream of every military commander.  Undoubtedly, military leaders in China and the Middle East will adopt machine-based killing. But winning a battle at all costs is not American. To be American means to consider the moral dimension of every action. It means giving the enemy (no matter how hated) the option to put down their weapon as an act of Grace. Congress must ban machine-based killing and decree that only a human can kill.

The U.S. Armed Forces are unlike any other. It places a huge emphasis on the life of the individual warfighter. Even the highest General shows respect to the lowest warfighter. In movies, the American soldier is the hero. However, in the world of “Type B AI” the American soldier, our hero, is an expendable asset. We must never let that happen.”
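Junaid’s “Require Human Override” recommendation describes a concrete control-system mechanism: rather than an off button an adversary could trigger with bad data, the system degrades to human-in-the-loop approval. A minimal sketch of such a gate, with entirely hypothetical names and logic:

```python
from enum import Enum, auto

class Mode(Enum):
    AUTONOMOUS = auto()      # AI recommendations execute directly
    HUMAN_OVERRIDE = auto()  # every recommendation requires human sign-off

class ControlGate:
    """Hypothetical sketch of a human-override gate (not a real system):
    instead of shutting down, the system falls back to requiring a
    human decision for every recommendation."""

    def __init__(self) -> None:
        self.mode = Mode.AUTONOMOUS

    def engage_override(self) -> None:
        # A human operator flips the system into override mode.
        self.mode = Mode.HUMAN_OVERRIDE

    def execute(self, recommendation: str, human_approves=None) -> bool:
        if self.mode is Mode.HUMAN_OVERRIDE:
            # In override mode, nothing executes without explicit approval.
            return bool(human_approves and human_approves(recommendation))
        return True  # autonomous mode: recommendation passes through

gate = ControlGate()
gate.engage_override()
print(gate.execute("strike target", human_approves=lambda r: False))  # False
```

The design choice matters: an off switch removes the system from the fight, whereas an override mode keeps it operating while restoring human judgment at the point of action.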

For some in the commercial sector, Junaid’s recommendation to “ban machine-based killing” may feel like it does not apply to private sector implementations of machine learning. But the potential for AI-based accidents and the growing controversy over the accident rate in the autonomous vehicles/“future of mobility” subsector make it a salient point for the commercial space as well. What rate of fatalities is morally and ethically permissible as the autonomous future puzzles itself out on the road? An industry-wide ban on machine-based fatalities would dictate 0%. What would be the really tough, strategic market implications of such a commitment by technology companies?

Overall, the “compare and contrast” structure of the “Prediction and Judgment” paper is very effective. Decision-making frameworks in the military and commercial sectors are broken down into component parts, case studies, and use cases that, side by side, illuminate sophisticated core issues very clearly. Military strategy has always informed business strategy – and this paper is an interesting contribution to the literature, effectively extending military strategic thinking to the challenges at hand in machine learning.

And, upon further analysis, it becomes clear that the paper is not a case of academics “talking at” the military or offering a facile, misinformed perspective on the vital role of people, leadership, and good judgment in the military. In fact, the authors highlight this quote from the 2018 Department of Defense Artificial Intelligence Strategy in the context of their research: “The women and men in the U.S. armed forces remain our enduring source of strength; we will use AI-enabled information, tools, and systems to empower, not replace, those who serve.” (116) “Yet, the strategy’s stated goal of ‘creating a common foundation of shared data, reusable tools, frameworks and standards, and cloud and edge services’ is more of a description of the magnitude of the problem than a blueprint for a solution.” (117)

What Next? Key Insights from the Research

1 – AI Complements (over AI Substitutes) in Business and War: As Goldfarb and Lindsay note early in the research paper: “One of the key insights from the literature on the economics of technology is that the complements of a new technology determine its impact. (7) AI, from this perspective, is not a simple substitute for human decision-making. Rapid advances in machine learning have improved statistical prediction, but prediction is only one aspect of decision-making. Two other important elements of decision-making—data and judgment—represent the complements to prediction. Just as cheaper bread expands the market for butter, advances in AI that reduce the costs of prediction are making its complements [data and judgment] more valuable.

The contestation of AI complements, therefore, is likely to unfold differently than the imagined wars of “AI substitutes” (8). It is reasonable to expect organizational and strategic context to condition the performance of automated systems, as with any other information technology (13). AI may seem different, nevertheless, because human agency is at stake. Recent scholarship raises a host of questions about the prospect of automated decision-making:

  • How will war “at machine speed” transform the offense-defense balance? (14)
  • Will AI undermine deterrence and strategic stability (15) or violate human rights? (16)
  • How will nations and coalitions maintain control of automated warriors? (17)
  • Does AI shift the balance of power from incumbents to challengers or from democracies to autocracies? (18)

These questions focus on the substitutes for AI because they address the political, operational, and moral consequences of replacing people, machines, and processes with automated systems. The literature on military AI has focused less on the complements of AI, namely the organizational infrastructure, human skills, doctrinal concepts, and command relationships that are needed to harness the advantages and mitigate the risks of automated decision-making. (19)

In this article, we challenge the assumptions behind AI substitution and explore the implications of AI complements. An army of lethal autonomous weapon systems may be destabilizing, and such an army may be attractive to democracies and autocracies alike. The idea that machines will replace warriors, however, represents a misunderstanding about what warriors actually do. We suggest that it is premature to forecast radical strategic consequences without first clarifying the problem that AI is supposed to solve. We provide a framework that explains how the complements of AI (i.e., data and judgment) affect decision-making.”
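The “cheaper bread expands the market for butter” logic of complements quoted above can be made concrete with a toy model (our construction, not the paper’s formalism): if the value of a decision is the product of data, prediction, and judgment quality, then better prediction raises the marginal payoff to better judgment and better data.

```python
# Toy complements model (our construction, not the paper's formalism):
# decision value = data quality x prediction quality x judgment quality.

def decision_value(data_q: float, prediction_q: float, judgment_q: float) -> float:
    # A multiplicative production function: the inputs are complements,
    # so improving one raises the marginal value of the others.
    return data_q * prediction_q * judgment_q

def marginal_value_of_judgment(data_q: float, prediction_q: float) -> float:
    # Derivative of decision_value with respect to judgment quality.
    return data_q * prediction_q

# Before AI (mediocre prediction) vs. after AI (cheap, accurate prediction):
for label, pred_q in (("before AI", 0.4), ("after AI", 0.9)):
    mv = marginal_value_of_judgment(data_q=0.8, prediction_q=pred_q)
    print(f"{label}: marginal value of judgment = {mv:.2f}")
# Better prediction more than doubles the payoff to improving judgment,
# which is why cheaper prediction makes its complements more valuable.
```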

2 – Avoid the Risk of Low-quality Data when Considering a Machine Learning Implementation for Strategy and Judgment: AI coverage saturates the business “thought leadership” space. The article does a great job of using military organizational priorities and the challenges of the “fog of war” to reinforce that “Machines are good at prediction, but they depend on data and judgment, and the most difficult problems in war are information and strategy. The conditions that make AI work in commerce are the conditions that are hardest to meet in a military environment because of its unpredictability.” (2) We would argue that the pursuit of strategic and competitive advantage in this period of simultaneous crises and unprecedented uncertainty amounts to operating under war conditions for most decision-makers and business leaders in the private sector.

With that, we encourage you to apply to your organization the manner in which the authors frame risk for AI systems designed for strategy and judgment functions in the military and in war conditions: “War… usually lacks abundant unbiased data, and judgments about objectives and values are inherently controversial, but that doesn’t mean it’s impossible.” The researchers argue that AI would be best employed in bureaucratically stabilized environments on a task-by-task basis. “All the excitement and the fear are about killer robots and lethal vehicles, but the worst case for military AI in practice is going to be the classically militaristic problems where you’re really dependent on creativity and interpretation,” Lindsay said. “But what we should be looking at is personnel systems, administration, logistics, and repairs.” In short: keep humans in the loop when the logic of decisions requires this same creativity and interpretation. Do not accept wholesale the business literature that suggests machine learning solves this problem.
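A minimal sketch can illustrate why the conditions that make AI work in commerce are hardest to meet in war (assuming numpy and scikit-learn are available; the data and the “wartime” shift are invented for illustration): a model trained under stable conditions collapses to roughly coin-flip accuracy when the environment changes the rules.

```python
# Illustration (assumes numpy and scikit-learn): prediction quality
# depends on the deployment environment resembling the training data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stable "commercial" conditions: outcomes follow a learnable pattern.
X_train = rng.normal(size=(1000, 2))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X_train, y_train)

# "Fog of war": the environment shifts, and outcomes now follow a
# relationship the training data never exhibited.
X_war = rng.normal(size=(1000, 2))
y_war = (X_war[:, 0] - X_war[:, 1] > 0).astype(int)

print("stable conditions: ", model.score(X_train, y_train))  # near 1.0
print("shifted conditions:", model.score(X_war, y_war))      # near 0.5
```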

3 – The Weaponization of Data and the Potential Weaponization of Entire Decision-making Processes: In this era of unrelenting cyberattacks on corporate and critical cyber infrastructure, additional human intervention is needed to guard against efforts by the enemy (i.e., geopolitical adversaries, nation-state cyber forces, non-state cyber actors, or just plain industry competitors) to weaponize the data used in machine learning systems for the decision-making processes that inform strategy and judgment.

The authors make a compelling case that the attack surface expands fundamentally when data systems are integrated into component parts that are mission-critical to final judgments. As we have learned the hard way from social media and disinformation, any new information vector can be weaponized, with disastrous unintended consequences that are difficult to untangle or, worse, impossible to put back in the bottle:

“There are also consequences to using AI for both the military and its adversaries, according to the researchers. If humans are the central element in deciding when to use AI in warfare, then military leadership structure and hierarchies could change based on the person in charge of designing and cleaning data systems and making policy decisions.

This also means adversaries will aim to compromise both data and judgment since they would largely affect the trajectory of the war. Competing against AI may push adversaries to manipulate or disrupt data to make sound judgment even harder. In effect, human intervention will be even more necessary. If AI is automating prediction, that’s making judgment and data really important. We’ve already automated a lot of military action with mechanized forces and precision weapons, then we automated data collection with intelligence satellites and sensors, and now we’re automating prediction with AI. So, when are we going to automate judgment, or are there components of judgment that cannot be automated?” (2)
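The data-weaponization point can be illustrated with a toy poisoning experiment (again assuming numpy and scikit-learn; the label-flipping scheme is our invention for illustration): as an adversary corrupts more of the training stream, the predictions that downstream judgment depends on become less reliable, which is exactly why human intervention becomes more necessary.

```python
# Illustration (assumes numpy and scikit-learn): an adversary who can
# flip training labels degrades the model's predictions downstream.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
X_test = rng.normal(size=(1000, 2))
y_test = (X_test[:, 0] + X_test[:, 1] > 0).astype(int)

for poison_rate in (0.0, 0.25, 0.45):
    y_poisoned = y.copy()
    idx = rng.choice(len(y), size=int(poison_rate * len(y)), replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # adversarial label flipping
    acc = LogisticRegression().fit(X, y_poisoned).score(X_test, y_test)
    print(f"poisoned fraction {poison_rate:.0%}: test accuracy {acc:.2f}")
```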

Sources:

7. See, for example, Timothy F. Bresnahan, Erik Brynjolfsson, and Lorin M. Hitt, “Information Technology, Workplace Organization, and the Demand for Skilled Labor: Firm-Level Evidence,” Quarterly Journal of Economics, Vol. 117, No. 1 (February 2002), pp. 339–376, https://doi.org/10.1162/003355302753399526; and Shane Greenstein and Timothy F. Bresnahan, “Technical Progress and Co-Invention in Computing and in the Uses of Computers,” Brookings Papers on Economic Activity: Microeconomics (Washington, D.C.: Brookings Institution Press, 1996), pp. 1–83.

8. See, for example, Paul Scharre, Army of None: Autonomous Weapons and the Future of War (New York: W.W. Norton, 2018); Michael C. Horowitz, “When Speed Kills: Lethal Autonomous Weapon Systems, Deterrence and Stability,” Journal of Strategic Studies, Vol. 42, No. 6 (2019), pp. 764–788, https://doi.org/10.1080/01402390.2019.1621174; James Johnson, “Artificial Intelligence and Future Warfare: Implications for International Security,” Defense & Security Analysis, Vol. 35, No. 2 (2019), pp. 147–169, https://doi.org/10.1080/14751798.2019.1600800; and Kenneth Payne, I, Warbot: The Dawn of Artificially Intelligent Conflict (New York: Oxford University Press, 2021).

14. Kenneth Payne, “Artificial Intelligence: A Revolution in Strategic Affairs?” Survival, Vol. 60, No. 5 (2018), pp. 7–32, https://doi.org/10.1080/00396338.2018.1518374; Paul Scharre, “How Swarming Will Change Warfare,” Bulletin of the Atomic Scientists, Vol. 74, No. 6 (2018), pp. 385–389, https://doi.org/10.1080/00963402.2018.1533209; Ben Garfinkel and Allan Dafoe, “How Does the Offense-Defense Balance Scale?” Journal of Strategic Studies, Vol. 42, No. 6 (2019), pp. 736–763, https://doi.org/10.1080/01402390.2019.1631810; and John R. Allen, Frederick Ben Hodges, and Julian Lindley-French, “Hyperwar: Europe’s Digital and Nuclear Flanks,” in Allen, Hodges, and Lindley-French, Future War and the Defence of Europe (New York: Oxford University Press, 2021), pp. 216–245.

15. Jürgen Altmann and Frank Sauer, “Autonomous Weapon Systems and Strategic Stability,” Survival, Vol. 59, No. 5 (2017), pp. 117–142, https://doi.org/10.1080/00396338.2017.1375263; Horowitz, “When Speed Kills”; Mark Fitzpatrick, “Artificial Intelligence and Nuclear Command and Control,” Survival, Vol. 61, No. 3 (2019), pp. 81–92, https://doi.org/10.1080/00396338.2019.1614782; Erik Gartzke, “Blood and Robots: How Remotely Piloted Vehicles and Related Technologies Affect the Politics of Violence,” Journal of Strategic Studies, published online October 3, 2019, https://doi.org/10.1080/01402390.2019.1643329; and James Johnson, “Delegating Strategic Decision-Making to Machines: Dr. Strangelove Redux?” Journal of Strategic Studies, published online April 30, 2020, https://doi.org/10.1080/01402390.2020.1759038.

16. Ian G.R. Shaw, “Robot Wars: US Empire and Geopolitics in the Robotic Age,” Security Dialogue, Vol. 48, No. 5 (2017), pp. 451–470, https://doi.org/10.1177/0967010617713157; and Lucy Suchman, “Algorithmic Warfare and the Reinvention of Accuracy,” Critical Studies on Security, Vol. 8, No. 2 (2020), pp. 175–187, https://doi.org/10.1080/21624887.2020.1760587.

17. Heather M. Roff, “The Strategic Robot Problem: Lethal Autonomous Weapons in War,” Journal of Military Ethics, Vol. 13, No. 3 (2014), pp. 211–227, https://doi.org/10.1080/15027570.2014.975010; Heather M. Roff and David Danks, “‘Trust but Verify’: The Difficulty of Trusting Autonomous Weapons Systems,” Journal of Military Ethics, Vol. 17, No. 1 (2018), pp. 2–20, https://doi.org/10.1080/15027570.2018.1481907; Risa Brooks, “Technology and Future War Will Test U.S. Civil-Military Relations,” War on the Rocks, November 26, 2018, https://warontherocks.com/2018/11/technology-and-future-war-will-test-u-s-civil-military-relations/; and Erik Lin-Greenberg, “Allies and Artificial Intelligence: Obstacles to Operations and Decision-Making,” Texas National Security Review, Vol. 3, No. 2 (Spring 2020), pp. 56–76, https://dx.doi.org/10.26153/tsw/8866.

18. Horowitz, “Artificial Intelligence, International Competition, and the Balance of Power”; Ben Buchanan, “The U.S. Has AI Competition All Wrong,” Foreign Affairs, August 7, 2020, https://www.foreignaffairs.com/articles/united-states/2020-08-07/us-has-ai-competition-all-wrong; and Michael Raska, “The Sixth RMA Wave: Disruption in Military Affairs?” Journal of Strategic Studies, Vol. 44, No. 4 (2021), pp. 456–479, https://doi.org/10.1080/01402390.2020.1848818.

19. A notable exception is Horowitz, “Artificial Intelligence, International Competition, and the Balance of Power.” We agree with Horowitz that organizational complements determine AI diffusion, but we further argue that complements also shape AI employment, which leads us to different expectations about future war.

27. The main ideas from the economics literature on decision-making are summarized in Itzhak Gilboa, Making Better Decisions: Decision Theory in Practice (Oxford: Wiley-Blackwell, 2011). See also John D. Steinbruner, The Cybernetic Theory of Decision: New Dimensions of Political Analysis (Princeton, N.J.: Princeton University Press, 1974). On the intellectual impact of cybernetics generally, see Ronald R. Kline, The Cybernetics Moment: Or Why We Call Our Age the Information Age (Baltimore, Md.: Johns Hopkins University Press, 2015). Classic applications of cybernetic decision theory include Karl W. Deutsch, The Nerves of Government: Models of Political Communication and Control (New York: Free Press, 1963); and James R. Beniger, The Control Revolution: Technological and Economic Origins of the Information Society (Cambridge, Mass.: Harvard University Press, 1989).

28. Herbert A. Simon, “Theories of Decision-Making in Economics and Behavioral Science,” American Economic Review, Vol. 49, No. 3 (June 1959), p. 273, https://www.jstor.org/stable/1809901.

29. OODA stands for the “observe, orient, decide, and act” phases of the decision cycle. Note that “orient” and “decide” map to prediction and judgment, respectively. These phases may occur sequentially or in parallel in any given implementation. On the influence of John Boyd’s cybernetic OODA loop in military thought, see James Hasík, “Beyond the Briefing: Theoretical and Practical Problems in the Works and Legacy of John Boyd,” Contemporary Security Policy, Vol. 34, No. 3 (2013), pp. 583–599, https://doi.org/10.1080/13523260.2013.839257.

116. Summary of the 2018 Department of Defense Artificial Intelligence Strategy, 2019, p. 4.

117. Ibid., p. 7.


About the Author

Daniel Pereira

Daniel Pereira is research director at OODA. He is a foresight strategist, creative technologist, and an information communication technology (ICT) and digital media researcher with 20+ years of experience directing public/private partnerships and strategic innovation initiatives.