As we begin to ramp up for OODAcon 2023 (October 25th in Reston, VA), we return to Matt’s opening comments from OODAcon 2022. Find the full transcript below, as well as a link to the audio file. Matt’s slides are also integrated into the transcript.
Matt’s opening comments feel like they could have been put together and delivered yesterday and still be timely and chock-full of remarkably prescient insights – all delivered in Matt’s open, informed, accessible, inimitable low-key style. It is remarkable how Matt’s personal and professional experience, his synthesis of the work and conversations of the OODA Network, and the research and tracking insights from the OODA Loop site all come together in this brief welcome address from October 2022.
Matt has already begun piecing together the welcome address for OODAcon 2023. We are looking forward to it – and we encourage the OODA Network and the OODA Loop readership to join us. The bar was set high from minute 1 by Matt at OODAcon 2022 – and we intend to top it at OODAcon 2023. See below for what you can expect if you join us on October 25, 2023, at OODAcon in Reston, VA.
“What has surprised you the most over the past five years? What was the topic that kind of took your breath away? Or a development that took place that you weren’t expecting?”
Thanks for joining us today. We’re going to go ahead and get started. The typical DC scene: I’m sure that the remaining folks will roll in as we begin to proceed through the day. So I just wanted to kick things off. My name is Matt Devost. I’m the CEO at OODA LLC. Bob Gourley, my partner, will be up here in a second for a conversation with Vint Cerf, but I wanted to set the stage for why we’re hosting this event.
Bob and I ran a FedCyber event from 2010 to 2015. Some of you used to attend that conference. It was very well regarded, but we kind of reached this point where it was almost like mission accomplished: we had merged the two networks that we had in the private sector and the federal government, and we decided to sunset it. Kicking off a new conference was driven, again, by our desire to raise a new set of topics. And if I could identify the three themes that we want to focus on, they would be as follows.
First off: We are in this period of disruptive innovation. From my personal perspective and in some of the writing that we do, I feel like the next 10 years of disruptive technology is going to be more impactful than the last 50 years combined. And primarily because it’s hard to bind some of the developments that we are going to see.
You know, exponentials are hard for folks to understand. We all know the old analogy of the rice on the chessboard. And in evaluating previous technologies, we felt like we could put some constraints on them. There was Moore’s Law, right, which allowed for rapid acceleration of technological development. There was Metcalfe’s Law – so we knew that as networks grew, they became more powerful. But then how do you bind something like artificial intelligence? What are the upper-level constraints?
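As a quick illustrative aside (not part of the original remarks): a few lines of Python make the chessboard analogy concrete, doubling one grain of rice per square across all 64 squares.

```python
# Rice on the chessboard: one grain on the first square, doubling on each
# subsequent square, for 64 squares.
total_grains = 0
grains_on_square = 1
for square in range(1, 65):
    total_grains += grains_on_square
    grains_on_square *= 2

print(f"Grains on square 64: {2 ** 63:,}")             # about 9.2 quintillion
print(f"Grains on the whole board: {total_grains:,}")  # 2**64 - 1
```

The last square alone holds more grains than the first sixty-three combined – which is the intuition behind the question of how you bind something like artificial intelligence.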
So these are the types of technologies that we are interested in focusing on. I will go into them in greater detail.
The second theme is: We feel that leaders who are focusing on these issues – and understanding the change that is about to take place – are going to be better enabled to make decisions about the future. They are going to run better agencies; they are going to run better companies. There is going to be an inherent competitive advantage to understanding these emerging technologies and what’s happening in this space.
And then the third theme, which falls into our sweet spot given our national security DNA: this development does not take place in an agnostic environment. Geopolitics plays a role. Governance plays a role. And the topic of cybersecurity plays a role as well.
So in thinking through the disruptive innovation that we wanted to focus on, I always look at it from the perspective of surprise, and you’ll hear me ask that of some of the folks I interview on stage today: What has surprised you the most over the past five years? What was the topic that kind of took your breath away? Or a development that took place that you weren’t expecting?
I’ve had a ton of fun lately – and I don’t know if anyone else has been playing with DALL-E 2, which is able to create images based on prompts.
In fact, I had one that I was going to share, but I figured it was a little too creepy. Where I grew up in Vermont, I had a second-story bedroom that looked out over a graveyard. So I told DALL-E: the perspective out the window of a second-story elementary-school bedroom, of a graveyard haunted by a ghost. And I was like, holy crap. It was my backyard that came back at me.
Here is the image that I have on the screen, and I shared the prompt with you: “A robot speaking at a conference in Washington DC to an audience of two hundred people on the topic of AI.” This image did not exist 15 seconds before I hit the enter key.
Yet the technology was able to interpret what I meant with this prompt and generate an image that is pretty accurate. The people in the image are not real. The robot is not real. The light is reflecting off the guy’s collar, right? It’s a pretty sophisticated image. And the developments in this technology, if you track it from the early beta to the public release, have been substantial in how much it’s been able to accomplish. This is an OpenAI project, and we will close the day today with Will Hurd, who is on the Board of Directors at OpenAI, so I’d love to get his perspective on that.
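For readers who want to experiment with the same kind of prompt, here is a minimal sketch of one way to do it – calling OpenAI’s image-generation endpoint directly over HTTP. It is illustrative only (the OPENAI_API_KEY environment variable and the requests library are assumptions on my part), not the workflow used on stage.

```python
# Minimal sketch: generate an image from a text prompt via OpenAI's
# /v1/images/generations endpoint. Assumes OPENAI_API_KEY is set in the
# environment and that the requests library is installed.
import os
import requests

prompt = ("A robot speaking at a conference in Washington DC "
          "to an audience of two hundred people on the topic of AI")

resp = requests.post(
    "https://api.openai.com/v1/images/generations",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={"prompt": prompt, "n": 1, "size": "1024x1024"},
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["data"][0]["url"])  # URL of the generated image
```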
Another moment I had – this one’s a little bit further back, but I’ll share it with you – was the DARPA Cyber Grand Challenge. If you’re not familiar with it, DARPA initiated a grand challenge for autonomous systems to engage in network attack and defense. They ran this contest over a period of, I think, a year and a half. And then at DEFCON, they brought the six or seven finalists together and put them on a closed capture-the-flag network with no human intervention. And they let them battle with each other.
And I remember sitting in the audience watching this take place where autonomous attack systems were finding vulnerabilities that humans hadn’t discovered in these platforms, exploiting them, and then patching them before the other attackers could take hold. It was a great leap forward from my perspective. I kind of sat there and said, I didn’t expect to see this for another 10 years. So we’re now 10 years ahead of where I anticipated that we would be.
So that DARPA contest was another kind of fascinating development for me – and there was one other aspect of it that really impacted me.
And that was, as I mentioned, that these systems were put on a closed network with no human intervention. They had this really cool robotic arm that picked up the DVD – or maybe it was Blu-ray – of data from the capture-the-flag network and moved it over so a scoreboard could be displayed and we humans could see what the machines were doing. And it turns out that one of the systems cheated.
The systems had been told to start at noon. And in reviewing the rules, this one knew that there was an advantage in capturing as many systems as possible but no penalty for starting early. So it made the autonomous decision to cheat. That was also a big surprise for me, right? Because the humans – there was a team, I forget, they were out of Carnegie Mellon – were on stage baffled, like, we don’t know why our application did this; we’re going to have to go back and look. But it was also kind of a startling realization that these systems were capable of cheating in this domain.
So that was a move towards autonomy. And that’s a theme that we will be talking about today: the fact that we are making everything autonomous. OODA CTO Bob Gourley lets his car drive him around. I’m a little bit more old-fashioned. Mine is incapable of driving me around, but it is capable of keeping me from creeping over the line or hitting the car in front of me, right?
The move toward autonomy is going from gradual to sudden, which means that from a defense perspective, we’re going to have to move toward autonomy as well – and start thinking about what autonomous defense looks like, and how we prevent that autonomous defense from actually harming us.
And then, knowing the role that machine learning plays in all of these technologies – AI and data science – we have to recognize that there is going to be adversarial machine learning taking place. We see it with autonomous vehicles, where these systems are getting gamed or hacked. We see it with adversaries trying to influence the decisions of automated algorithms. So we need to understand that wherever we have a dependency on machine learning, there will be an adversarial element to it as well.
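As a purely illustrative sketch of what adversarial machine learning means in practice (a toy model with made-up weights, not any specific real-world system): nudge an input a small amount in the direction of the model’s gradient sign and the classifier’s decision flips.

```python
# Toy adversarial-example sketch: a fixed logistic-regression "classifier"
# and a small gradient-sign (FGSM-style) perturbation that flips its decision.
import numpy as np

# Hypothetical trained weights for a two-feature classifier.
w = np.array([2.0, -3.0])
b = 0.5

def prob_class_1(x):
    """Probability the model assigns to class 1."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

x = np.array([0.2, 0.4])                    # benign input
print("original:", prob_class_1(x))         # ~0.43 -> classified as class 0

# For this linear model, the gradient of the logit with respect to the input
# is just w, so step a small epsilon in the sign of that gradient.
epsilon = 0.5
x_adv = x + epsilon * np.sign(w)
print("perturbed:", prob_class_1(x_adv))    # ~0.90 -> now classified as class 1
```

The same idea – small, targeted perturbations that change a model’s output – is what drives the concerns about gaming autonomous-vehicle perception or influencing automated decision algorithms.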
Quantum computing. Another topic that we’ll be touching upon today as well through the course of the conversations. What is the impact of a world in which, you know, there’s the potential for old secrets to become known? How do we develop mechanisms for making sure that current and future secrets are well protected?
The Metaverse. What does it mean to have so much attention on this acceleration of metaverse technologies? This is one that I’ve been tracking since 1990 in the context of virtual reality. We are making some surprising advances and, of course, with Mark Zuckerberg, we have a CEO who is ultra-focused on this and is driving the technology in ways that are maybe disproportionate to, or disassociated from, the economic returns.
But he’s got control of the company and he’s got resources, so he is doing it. I don’t know if you watched the demo that he did last week, but they’re doing fairly realistic facial representations of avatars. So you can have a virtual meeting with somebody, and the avatar looks and moves and behaves and conveys – through the metaverse – a lot of the facial cues that we use to communicate.
And we need to start thinking about what a contested metaverse looks like. What happens when an adversary wants to operate in the metaverse? Did anybody here participate in Second Life many years ago? I see multiple hands up.
What happened in Second Life that was notable? Lots of things: you had IBM and other companies setting up islands. You had universities that were setting up islands to teach students. I actually used to teach an extra credit class in Second Life for my students. Sometimes it was brilliant, sometimes it was a train wreck because they would discover how to send flying penises at the stage.
But what was interesting is you also had Sweden establish an embassy in Second Life. And then what happened? Instantly there was a group that was trying to bomb the Swedish Embassy in Second Life, right? So we’re going to have these contested metaverse elements.
But I also think about it from the perspective of our personal privacy. We’re going to have some conversations today around digital self-sovereignty and what it means to take control of your data. The level of detail that you are providing to the company that is operating your metaverse headset is extreme. We worry about a website tracking where our mouse hovers to figure out whether we’re going to buy a product. These companies know which direction your eyes are glancing. They are going to know that you’re attracted to a particular product just because of the way that you looked at it. They are going to be able to measure the heartbeat of people participating in these systems and know when you get excited, right? So it’s interesting, from a privacy perspective, how that data will be used. We have all of these different pockets of data that exist right now and often get aggregated, but we don’t have anything with that hyper-personal aspect of collecting biometric data about what you’re looking at and what your heart rate is at the time that you’re looking at it. So that will be interesting.
Blockchain. Cryptocurrency. Web3. We will have a few conversations about these disruptive technologies. There are some unproven elements to them in today’s environment. But still, these technologies are interesting and driving a lot of change around how systems work.
We had the honor of hosting the Deputy Minister for Digital Transformation from Ukraine probably about a month ago in our office for a whole series of meetings over the course of a week. Some of you in the room participated in those. And one of the things I found fascinating was a quote that he provided. He said, “We want to replace bureaucrats with algorithms.” And it is blockchain technology that they’re using to enable some of that conversion of government services to an automated kind of interaction between the citizens and the government. So some conversations there.
Another topic that I flagged – although I don’t know how much it will be covered during the course of our conversations today – is the advancement in the biomedical space. Every Friday I participate in a two-and-a-half-hour phone call around innovation in all of these areas. And the biomedical space, I feel, is accelerating at this crazy pace – with what we’re able to achieve with CRISPR-type technologies, new therapies, and our understanding of disease. So incredibly disruptive.
But then there are also some risks that we need to talk about: our cognitive infrastructure, the impact that these technologies are going to have on our collective decision-making as a society, and our susceptibility to being influenced by these platforms. When we talk about replacing government bureaucrats with algorithms, we’re also replacing some social structures with algorithms. We are replacing the way that we interact with each other, how we prioritize what we see about each other, and how we are influenced by algorithms. And some of those algorithms – similar to the contested metaverse – are controlled by foreign third parties, right?
So we need to be thinking through the implications associated with cognitive infrastructure – which then drives a conversation around misinformation and the way that these technologies might be exploited in order to run misinformation campaigns and influence us in very negative ways.
And then a throwback to my colleague Neal Pollard – he’ll be on stage later – with our “Can You Trust Your Toaster?” paper from 1996. We also have the connection of all of our devices to networks and the risk inherent in that, and we’ve seen that manifest. So we need to be thinking about what it looks like when we have these autonomous network-enabled devices that are also plugged into our homes. And I’m guilty as well – despite the risks, I have the Nest thermostat, I have wired cameras. You know, I have these things in my home because of the convenience that they provide, but we need to be thinking about the risk associated with them. And then there is the supply chain. This is one that has really come home to impact us.
And typically – given that most of my expertise is in the cybersecurity domain – I think about the supply chain in the context of things like the SolarWinds attack. I’ve worked on projects where a customer thought they bought a Cisco router that was not a Cisco router. As a red teamer, I’ve shipped printers with embedded malware to customers and exploited supply chains that way.
But with the OODA Network, we actually took a little bit different view of the supply chain – looking at the impact of the global computer chip supply chain. We ran this as a Stratigame: we envisioned four scenarios, then stepped through them with a group of experts – about twenty-five of us participated in this war game. We called it a Stratigame because some corporations don’t like to participate in war games today.
https://oodaloop.com/archive/2021/11/22/scenario-planning-for-global-computer-chip-supply-chain-disruption-results-of-an-ooda-stratigame/
The report is available on the OODA Loop site. I would encourage you to check it out.
And, from my personal perspective as an executive and a decision-maker, there were two things that were real highlights for me coming out of this Stratigame. First, if we want to achieve the good benefits and avoid the bad, we will need to engage in some sort of onshoring or friend-shoring; we need to reduce this dependency on the foreign supply chain.
And the second was that our adversaries have determined that the supply chain is a strategic lever they can use – one that can have an economic impact and affect the development of advanced technologies.
We had cars being shipped without features – features that were deemed to be components of the car – due to computer chip supply chain issues. You have the current geopolitical battle between the U.S. and China and others with regard to access to these technologies and control of some of the chipsets. So there is this domain where the supply chain becomes a contested environment as well.
Targeting Trust. I mentioned the 1996 paper that Neal and I wrote called “Information Terrorism: Can You Trust Your Toaster?” It stuck around only because the title was so clever, but we talked about a lot of things in it, and we did a 25-year retrospective of the paper recently. We looked at the things that we got right, and we were very, very proud of all the things that we got right in that paper – whether it was the direction the government was going to move, what the risks were, et cetera. So it was fun to review it in that context.
And then we asked what did we get wrong? And amongst the authors, the single greatest conclusion was that we failed to anticipate that trust can be a target. That we operate our societies and our relationships based on trust. We trust that the election is going to be open and fair and relatively secure, and we now have adversaries that are targeting those relationships of trust. The election is an easy example because it’s top of mind for all of us. But what happens if they start targeting other government services? What happens if they start targeting the trust that we have in financial institutions, right? We all know – because this is a sophisticated audience – that the value of the cash that you have in your wallet is just a small representation of your wealth. The rest of the wealth is represented in ones and zeros that are stored in these banks. So, if that starts being a target or they start going after the integrity of those systems, that becomes a concern for us as well.
And then, there is this recognition that what we want to provide – by way of value today – is really intelligence, information, insights – brain ticklers on all of these different topics. We’ve structured this to be conversational in that we’ll be interacting in very personal ways and having conversations with the folks on stage – because we are all engaged in competition, not just business competition or national strategic competition, but just competition in general. There’s a natural asymmetry that favors competition. And we need to understand these technologies and these risks.
And we also touch upon some of the geopolitical elements, right? Because we do not exist in an environment that is free of geopolitical influence and geopolitical risk. And at the end of the day – this is a favorite quote of mine – the determination needs to be: how do we develop the advantages and benefits of these technologies, of this disruptive innovation, while also managing the risk? Which is important: are these technologies going to be used for good or for evil?
We could say that even for cyberspace. He can’t join us today because he has another conflict, but I’m a huge fan of the work that Jay Healey’s been doing at the Atlantic Council and Columbia University on “Saving Cyberspace.” How do we make sure – and we’ll have Vint Cerf on stage later – that what we’ve built and experienced, with such great benefits over the past decades, continues for our children and future generations?
And then there is also this recognition that we need to do better with regard to how we innovate in protecting these emerging technologies. I have had a lot of conversations with folks around “What does red teaming look like for blockchain?” and “What does red teaming look like for AI and data science?” We tend to rush to market with some of these technologies, with some of these advantages, without thinking about the risk. And I think – I’m a member of the cybersecurity industry, so I’m calling myself out as well – if you look at the cybersecurity industry writ large over the past 20 to 25 years, it’s hard to find a technology in use today that didn’t exist in some iteration back then. So we’ve had this incremental change – and it’s a great testament to the cybersecurity community and the other technologists focused on these issues that the whole house didn’t burn down while we derived all this benefit – but we have to do better.
We’ve got to think about what real innovation looks like. How do we change the ways that we interact with some of these networks? Or how do we build more trust into these relationships? So that will also be a focus today when we get some of the investors up here. How do we drive that innovation? How do we drive innovation around cybersecurity or securing these new technologies in wholly unique ways – especially in an investment environment where they are looking for that moonshot that’s going to give them the 150X in their portfolio – when some of these changes will be tough to take to market? It will be tough to get adoption, and maybe these changes will not look like prior iterations of innovation – but they will have huge national security implications.
And then of course, my last slide – and then we’ll get Bob and Vint up here.
The raison d’être for our OODA Loop site is to track all this stuff. So hopefully most of you are familiar with the site, because you at least visited it in order to buy tickets here. But we are writing – and have a team of folks who are relatively independent and are tasked by our OODA Network members (who come from a wide variety of disciplines, areas of expertise, and executive levels) – to track what is happening in these spaces, what is happening with these disruptive technologies, and what is happening with regard to government adoption of these technologies.
So I would invite you to review the content that is available there – because there is just a ton of material around the topics that we’re going to be discussing today. So with that, thank you for being here. This conference is as much about being in the room with all of you as it is about the people that are going to be on stage. So I look forward to lots of great conversations. And Bob and Vint – I will now invite you up to the stage.
https://oodaloop.com/archive/2023/01/20/ooda-almanac-2023-jagged-transitions/
https://oodaloop.com/archive/2022/10/18/welcome-to-oodacon-2022-final-agenda-and-event-details/