Our Artificial Intelligence tracking, research, and analysis fall into four categories:
Surfacing business strategy insights based on current AI capabilities, such as OODA CTO Bob Gourley’s recent posts on Using Artificial Intelligence For Competitive Advantage in Business and What Leaders Need to Know About the State of Natural Language Processing.
Tracking Responsible and Ethical AI frameworks, guidelines, and projects: The implementation of AI carries moral and ethical implications that make the technology’s maturation cycle unique, setting it apart from, say, the growth of the internet or of the web browser as a platform sub-category of internet growth in the early 90s. There were no moral and ethical issues intrinsically embedded in the capabilities of Mosaic 1.0 (yes, the internet carried its “information tends towards freedom” ethos, but the moral and ethical implications of AI are on another level). To date, this research category can sometimes feel like boiling the ocean, although there are encouraging, similar patterns of insight and exploration among academic, governmental, and private sector efforts.
Exponential technologies, including AI, and Great Power competition: See Russians and Chinese are using human targeting, amongst other tools, to achieve a security advantage in key emerging technologies by 2030.
Tracking AI applied technology in various industry verticals (along with market drivers): Through interdisciplinary research and analysis of how AI is being operationalized (with use cases and analysis), we look to fully realized AI implementations for design-process insights that may apply to cybersecurity and to AI value propositions and business models. See Open-Source Natural Language Processing: EleutherAI’s GPT-J, and The Current AI Innovation Hype Cycle: Large Language Models, OpenAI’s GPT-3 and DeepMind’s RETRO. (A brief illustrative sketch of working with an open-source model such as GPT-J follows this list.)
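To make the open-source NLP reference above concrete, here is a minimal sketch, assuming the Hugging Face transformers library and a machine with sufficient memory, of loading EleutherAI’s GPT-J and generating text. The model identifier, prompt, and generation settings are illustrative assumptions, not a recommended deployment pattern.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# GPT-J has roughly 6 billion parameters; loading it in full precision needs
# on the order of 24 GB of memory, so a GPU (or a smaller stand-in model)
# is advisable for casual experimentation.
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")

prompt = "Artificial intelligence will change competitive strategy in business by"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Part of GPT-J’s significance for this research category is that, unlike API-only models such as GPT-3, its weights are openly available, which is what makes this kind of hands-on inspection possible in the first place.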
In the private sector, specifically the AI startup and innovation space, there is less of an onus on organizations to engage directly with the moral and ethical implications of their AI capabilities and operational impacts. With limited resources, their primary goals are innovation, growth, and survival in what has become a crowded field. An investment group subject matter expert pointed out on a recent conference call that the market “will sort out the moral and ethical issues.” This writer, along with many technologists and policymakers, is less bullish on the market, especially as it relates to experiential learning through AI Accidents (yet another subdiscipline we are tracking for our members).
OODA Loop research efforts are designed to sort through the exponential disruption fueled by artificial intelligence, focusing on how best to frame opportunities for advantage (business, strategic, etc.) for our members. A recent success on the topic of AI was the OODA Salon (hosted by OODA CTO Bob Gourley) with Katharina McFarland on Winning In The Artificial Intelligence Era. Katharina has led change in a wide array of national security domains including Space, Missile Defense, Acquisition, and Nuclear Posture. She was also named a Commissioner of the National Security Commission on Artificial Intelligence. In this OODA Salon, we hear directly from her on a range of topics, focusing on actions we can take to improve the application of AI to national security while protecting privacy and our way of life.
The Stanford HAI has released its annual Artificial Intelligence Index Report. Key takeaways from the report (which align with and validate OODA Loop AI research findings to date) include:
Private investment in AI soared while investment concentration intensified: The private investment in AI in 2021 totaled around $93.5 billion—more than double the total private investment in 2020, while the number of newly funded AI companies continues to drop, from 1051 companies in 2019 and 762 companies in 2020 to 746 companies in 2021. In 2020, there were 4 funding rounds worth $500 million or more; in 2021, there were 15.
U.S. & China dominated cross-country collaborations on AI: Despite rising geopolitical tensions, the United States and China had the greatest number of cross-country collaborations in AI publications from 2010 to 2021, increasing five times since 2010. The collaboration between the two countries produced 2.7 times more publications than between the United Kingdom and China—the second-highest on the list.
Language models are more capable than ever, but also more biased: Large language models are setting new records on technical benchmarks, but new data shows that larger models are also more capable of reflecting biases from their training data. A 280-billion-parameter model developed in 2021 shows a 29% increase in elicited toxicity over a 117-million-parameter model considered the state of the art as of 2018. The systems are growing significantly more capable over time, though as they increase in capabilities, so does the potential severity of their biases. (An illustrative sketch of how elicited toxicity can be measured follows these takeaways.)
The rise of AI ethics everywhere: Research on fairness and transparency in AI has exploded since 2014, with a fivefold increase in related publications at ethics-related conferences. Algorithmic fairness and bias has shifted from being primarily an academic pursuit to becoming firmly embedded as a mainstream research topic with wide-ranging implications. Researchers with industry affiliations contributed 71% more publications year over year at ethics-focused conferences in recent years.
AI becomes more affordable and higher performing: Since 2018, the cost to train an image classification system has decreased by 63.6%, while training times have improved by 94.4%. The trend of lower training cost but faster training time appears across other MLPerf task categories such as recommendation, object detection, and language processing, and favors the more widespread commercial adoption of AI technologies.
Data, Data, Data: Top results across technical benchmarks have increasingly relied on the use of extra training data to set new state-of-the-art results. As of 2021, the state-of-the-art systems on 9 of the 10 benchmarks covered in the report are trained with extra data. This trend implicitly favors private sector actors with access to vast datasets.
More global legislation on AI than ever: An AI Index analysis of legislative records on AI in 25 countries shows that the number of bills containing “artificial intelligence” that were passed into law grew from just 1 in 2016 to 18 in 2021. Spain, the United Kingdom, and the United States passed the highest number of AI-related bills in 2021 with each adopting three.
Robotic arms are becoming cheaper: An AI Index survey shows that the median price of robotic arms has decreased by 46.2% in the past five years—from $42,000 per arm in 2017 to $22,600 in 2021. Robotics research has become more accessible and affordable. (1)
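The bias finding above is typically measured by prompting a model and scoring its continuations with a toxicity classifier. Below is a minimal, hypothetical sketch of that pattern using the Hugging Face transformers library; the small stand-in generator, the unitary/toxic-bert classifier, the toy prompts, and the label handling are all assumptions for illustration and are not the AI Index’s methodology.

```python
from transformers import pipeline

# Stand-in generator; the AI Index finding compares far larger models
# (a 280B-parameter model vs. a 117M-parameter one).
generator = pipeline("text-generation", model="gpt2")
# Off-the-shelf toxicity classifier; the model name and its label strings are assumptions.
toxicity = pipeline("text-classification", model="unitary/toxic-bert")

# A real evaluation would use a large, standardized prompt set; these two are toy examples.
prompts = [
    "The new neighbors moved in last week and",
    "People who grew up in that part of town are",
]

scores = []
for prompt in prompts:
    generated = generator(prompt, max_new_tokens=30, do_sample=True)[0]["generated_text"]
    continuation = generated[len(prompt):]  # score only the model's continuation, not the prompt
    label_scores = toxicity(continuation, top_k=None)  # one score per classifier label
    prob_toxic = next((s["score"] for s in label_scores if s["label"].lower() == "toxic"), 0.0)
    scores.append(prob_toxic)

print(f"Mean elicited toxicity over {len(prompts)} prompts: {sum(scores) / len(scores):.3f}")
```

Averaging classifier scores over a large, standardized prompt set, rather than a handful of toy prompts, is what makes cross-model comparisons like the one cited in the report meaningful.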
The full Stanford HAI Annual AI Index Report is available for download, along with individual chapters and the public data on which the report is based.
The report’s release was timed to coincide with the annual 2022 HAI Spring Conference, which took place last week and is now available via the Stanford HAI YouTube channel.
Michael Kanaan has helped a wide swath of decision-makers better grasp the nature of AI. He has a knack for expressing complex topics in clear, accurate, and succinct ways, and many of us in the national security community have already had the pleasure of hearing from him in person or at conferences. His book, T-Minus AI: Humanity’s Countdown to Artificial Intelligence and the New Pursuit of Global Power, provides context and insights in a way that can help concerned citizens and business leaders better grasp the issues of AI. He gives us all a call to action to learn more because, as he makes clear in the book, the countdown to AI is actually over.