
It is fair to say that the volume and speed of coverage of artificial intelligence in every area of society have been dizzying. In response, Brooke Gladstone and her team over at WNYC’s On the Media have updated and re-released an episode from early January 2023 that includes an interview with OODA CEO Matt Devost.

The updated show released this week: 

I, Robot 
July 7, 2023

This year, headlines have been dominated by claims that artificial intelligence will either save humanity – or end us. On this week’s On the Media, a reckoning with the capabilities of programs like ChatGPT, and declarations that machines can think. Plus, the potential implications of handing over decision-making to computers. 

We have added a transcript of Matt’s original conversation with On the Media host Brooke Gladstone to this post. See below.

About the original segment from January 2023

https://oodaloop.com/archive/2023/01/16/ooda-ceo-matt-devost-on-the-rise-of-ai-powered-weapons-and-the-implications-of-openais-chatgpt/

About the complete show – It’s a Machine’s World:

Schools across the country are considering whether to ban the new AI chatbot, ChatGPT. On this week’s On the Media, a look at the ever-present hype around AI and claims that machines can think. Plus, the potential implications of handing over decision-making to computers.

1. Tina Tallon [@ttallon], assistant professor of A.I. and the Arts at the University of Florida, on the love-hate relationship with AI technology over the past 70 years, and Nitasha Tiku [@nitashatiku], tech culture reporter for The Washington Post, on the history of the tech itself. Listen.

2. Geoffrey Hinton [@geoffreyhinton], a cognitive psychologist and computer scientist, on holograms, memories, and the origins of neural networks. Listen.

3. Matt Devost [@MattDevost], international cybersecurity expert and CEO and co-founder of the global strategic advisory firm OODA LLC, on the rise of AI-powered weapons and what it means for the future of warfare. Listen.

Music:
Original music by Tina Tallon
Horizon 12.2 by Thomas Newman
Bubble Wrap by Thomas Newman
Seventy-two Degrees and Sunny by Thomas Newman
Eye Surgery by Thomas Newman
Final Retribution by John Zorn
Lachrymose Fairy by Thomas Newman

OODA CEO Matt Devost on The Rise of AI-Powered Weapons and the Implications of OpenAI’s ChatGPT

“…if they start to demonstrate an ability to operate in a way that is more humane or cognizant of the human impact than a human decision-maker.”

Brooke Gladstone: [Geoff] Hinton…described his fear of autonomous lethal weapons powered by AI. I followed up on that with Matt Devost, an international cybersecurity expert who started his career hacking into systems for the U.S. Department of Defense back in the nineties. He gave me the beginner’s class on autonomous lethality.

Matt Devost: Once a target has been designated by a human decision-maker, the weapon will have the autonomy to operate and get there on its own: to navigate the terrain properly and make decisions about how it achieves the impact on that target, for example.

Gladstone: There isn’t a kid back in Oklahoma running it. It can make a decision and change its path based on its own information.

Devost: And probably much more quickly than a human drone operator would be able to achieve. Now, that doesn’t mean that we’re going to take humans out of the decision-making equation with regard to what gets targeted.

Gladstone: Not yet, anyway.

Devost: Not yet, but there is autonomy in how it achieves the mission, and in the ability to act in a swarm capacity, with the weapons making decisions amongst themselves by adjusting their mission profiles based on the swarm intelligence.

Gladstone: Yes, that’s when multiple weapons are simultaneously operating and communicating with each other – 

Devost: …with each other.

Gladstone: – making decisions based on each other’s behavior. That’s drone technology. But how would the next generation of swarming weapons behave?

Devost: What gets really interesting is if they start to demonstrate an ability to operate in a way that is more humane or cognizant of the human impact than a human decision-maker would be able to do, in which case now you start to have some autonomy with regards to the targeting itself.

“There may be a point in time in which AI is a more sensible and objective decision-maker.”

Gladstone: Can you give me an example of that?

Devost: You know, say we’re trying to target this facility, but we’re trying to minimize the potential for collateral damage. The drone is aware enough to know that a bus just pulled up next to the facility. There is autonomy built into the weapon that allows it to make, abort, or delay a decision based on a situation changing so rapidly that even a human being doesn’t have the capacity to make that call.

Gladstone: Right now, we wouldn’t allow weapons to autonomously target, but that could happen one day. And it brings up images of Dr. Strangelove and Fail Safe. 

Devost: That is going to be a concern. I think we’ve articulated pretty clearly, at least at the U.S. government level, that humans will remain in the loop as it relates to targeting other humans. It’s different if you’re targeting drones or a communications tower, etc. But we could reach a point at which the drones are more efficient and more humane decision-makers, based on the AI capabilities and analytics they’re able to achieve, the same way that we might someday decide that we should allow only self-driving cars. You know, humans do a really good job of killing a lot of ourselves in motor vehicles every year. There may be a point in time at which AI is a more sensible and objective decision-maker.

“I asked ChatGPT, ‘What do you think Bill meant when he said that?’ And it gave an incredible answer.”

Gladstone: Obviously, these new AI tools will have an impact on intelligence gathering and collection. And you say that for you, ChatGPT was a wow moment.

Devost: It was, for a couple of reasons. One is that it interacts with you based on questions, and you’re able to refine it in the same way you could refine a conversation with a human being: tell me more, or make a counterargument. But it also does a great job of understanding nuanced concepts. I’ll give an example. A friend of mine, Bill Kroll, who used to be the Deputy Director of the National Security Agency, had a quote a few years ago where he said, “The cybersecurity industry has a thousand points of light but no illumination.”

I asked ChatGPT, “What do you think Bill meant when he said that?” And it gave an incredible answer.

It said: 

“When someone says that the cybersecurity industry has a thousand points of light and no illumination, they are expressing frustration with the fragmented and disorganized nature of the industry. The term ‘a thousand points of light’ refers to the many different players and stakeholders, including government agencies, private companies, individuals, and security experts. Each of these players brings their own unique perspective and expertise to the field, but the lack of coordination and collaboration among them makes it difficult to develop a comprehensive and effective approach to cybersecurity.”

Gladstone: Holy cow.

Devost: That is an incredible response, right? And you can tell it: I want you to give a ranking or rating of how confident you are in your analysis. I also want you to provide a counterpoint. Plus, I want you to provide recommendations as to what we can do about this. So if you go in and ask it, “What is the probability that Iran will attack a U.S. bank with a cyber weapon?” it gives you a response that flows almost exactly like what you would see in an intelligence briefing that might be delivered all the way up to the president’s daily briefing. So it’s fascinating that it is able to not only query all this knowledge and produce these great responses, but also frame the response around the audience’s expectations.
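The prompting pattern Devost describes here is easy to reproduce. Below is a minimal sketch against the 2023-era openai Python package; the model choice, placeholder API key, and prompt wording are our own illustrative assumptions, not part of the interview.

import openai

openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder, not a real key

quote = ("The cybersecurity industry has a thousand points of light "
         "but no illumination.")

# Ask for an interpretation plus the three follow-ups Devost mentions:
# a confidence rating, a counterpoint, and recommendations.
prompt = (
    f'Interpret this quote: "{quote}" '
    "Then (1) rate your confidence in your analysis on a scale of 1-10, "
    "(2) offer the strongest counterargument to your interpretation, and "
    "(3) recommend what the industry could do about the problem."
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # model choice is illustrative
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message["content"])

Chaining follow-up messages into the same conversation ("tell me more," "make a counterargument") is what gives the refinement loop he compares to talking with a human analyst.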

“…there will be a lot of unique ways in which technology is used in the intelligence community.”  

Gladstone: But it has been shown over and over again that ChatGPT is fundamentally a people pleaser. It doesn’t care if it’s true or not. It will invent sources in order to give you something that has the exact format you’re asking for. So you can’t trust anything that ChatGPT says. How can it be helpful in intelligence gathering?

Devost: Yes. The intelligence community won’t use ChatGPT based on ChatGPT’s existing training dataset; it’ll be used based on datasets that are proprietary to the intelligence community. So what we’re about to see in the next year, and in the coming years, is these domain-specific versions of ChatGPT, where I control the training data, or where I tell it that it doesn’t have to be a people pleaser, it doesn’t have to be conversational. It should use the same heuristics it’s using to derive these answers, but if it doesn’t have a source, it doesn’t invent one; it can’t make judgments that aren’t based on a particular source. So it’s a very quick shift to move away from that inherent bias to using that capability in a way that’s very meaningful.
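One common way to approximate the domain-specific, source-bound behavior Devost anticipates is retrieval augmentation: inject passages from a controlled corpus into the system prompt and instruct the model to answer only from them. A minimal sketch follows, again assuming the 2023-era openai Python package; the document identifiers, placeholder key, and instructions are hypothetical illustrations, not an actual intelligence-community deployment.

import openai

openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder

# Stand-ins for passages retrieved from a proprietary corpus; a real
# system would pull these from the analyst's own document store.
retrieved_docs = [
    "[DOC-001] Assessment of cyber threats to the financial sector ...",
    "[DOC-002] Report on state-sponsored targeting of U.S. banks ...",
]

# The system prompt encodes Devost's constraints: not conversational,
# not a people pleaser, and no judgments without a cited source.
system_msg = (
    "You are an intelligence-analysis assistant. Answer ONLY from the "
    "documents below, citing their [DOC-xxx] identifiers. Do not be "
    "conversational. If the documents do not support an answer, say "
    "'insufficient sourcing' rather than inventing a source.\n\n"
    + "\n".join(retrieved_docs)
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": system_msg},
        {"role": "user", "content": "What is the probability that a "
                                    "U.S. bank will be targeted with a cyber weapon?"},
    ],
)
print(response.choices[0].message["content"])

Note that an instruction-level guardrail like this reduces, but does not eliminate, invented sources; production systems pair it with retrieval and automated citation checking.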

Gladstone: Give me an example. Would it interrogate a prisoner of war?

Devost: I don’t know that it would interrogate a prisoner of war, although you could certainly envision it being used to augment the questions a human interrogator is asking. But I think it’ll probably get really good at threat assessment and at making recommendations for remediating vulnerabilities. I think analysts might also use it to help them through their thinking. Right? They might produce an assessment and say, tell me how I’m wrong, and the AI serves as almost the tenth man rule, if you will, where it is by design taking the counterargument. So there will be a lot of unique ways in which the technology is used in the intelligence community.

“…the machines completely replacing me were very creative and fast. You know, that’s an uncomfortable feeling for somebody in the cybersecurity industry.”

Gladstone: How imminent is this kind of technology?

Devost: It’s incredibly imminent. The technology clearly exists. We’re going to see with version 4.0 a version that is much more constrained with regard to not making things up and is much more current. I mean, one of the existing flaws right now with ChatGPT is that the training data ends in 2021. If you start to have training data that is current as of whatever the model found this morning, that starts to get very, very interesting, and it means this technology can be applied to real-time issues in the next year or two.

Gladstone: So another wow moment you had was a challenge several years ago by DARPA. That is the government agency that drives a lot of amazing technology; it gave us the Internet, for one thing, and GPS. Tell me about what happened at that DARPA contest.

Devost: Yes, so that was fascinating for me. In cybersecurity, we have these contests that we call capture-the-flag contests, and they really are ways for people to compete, to demonstrate who’s the top hacker, who’s the top person at attacking systems. You hack systems and take control of them, and then you have to defend the flag: you have to make sure that you patch it and fix it, and you prevent other people from taking over that system and booting you off.

Gladstone: This is a cyber war game essentially.

Devost: Yes. So in 2016, they brought the finalists out to DEFCON, which is the largest hacker conference in the world, in Las Vegas, and they had the six finalists compete. That was another aha moment for me, where I felt like I was living in the future, similar to the way I felt when I encountered ChatGPT at the beginning of December. I started my career in 1995; it was my job for the Department of Defense to break into systems, show how they were vulnerable, and help system owners patch those systems. And here the machines completely replacing me were very creative and fast. You know, that’s an uncomfortable feeling for somebody in the cybersecurity industry, not because of the displacement, but because of the lack of explainability: the lack of understanding with regard to how resilient the patching is, or how to make sure that the AI doesn’t lose control of its objectives and do something that ends up being malicious behavior. So it’s definitely a brave new world in that regard.

“Where do we retain agency and where do we decide that the machines can do it better?”

Gladstone: How do we ensure that these weapons are safe to deploy? How do we ensure that they don’t commit war crimes?

Devost: Yeah, I think we’ll have clearly defined ethics around the use of artificial intelligence as it relates to things that could impact human lives or human safety. What’s going to be disconcerting is when we encounter adversaries that don’t have the same ethics. Do we end up having to unleash some sort of autonomy in our weapons because our adversaries have launched autonomous weapons against us? Are we put in a position of having to violate some of our principles because it’s the only way to appropriately defend ourselves? If we dig a little deeper, though, there are some other core risks. These technologies all run on systems that are vulnerable, so we have an underlying responsibility to make sure the infrastructure is robust and secure. You also need to make sure that where the training data has an open collection model (ChatGPT draws intelligence from the Internet itself), you are aware of adversaries that might try to pollute that environment. What if I decide to put up blog posts, write websites, take out advertisements, or go on Twitter to pursue a particular narrative that will influence the decision-making of a particular AI? And the third area is going to be around the robustness of the algorithms and making sure that we have removed bias. I think that will drive, in the Department of Defense, a requirement for what we call explainable AI: the AI has to describe to us in understandable terms how it arrived at a decision.

Gladstone: The argument for drones was that Americans wouldn’t be killed if we used them; critics say we’ve overused them because the cost to us is so low. We’ve been able to destroy the world many times over for 70 years. But the ability to be more surgical in our destruction, and even to hand off our own autonomy to machines that may well be smarter than we are, is a terrifying prospect.

Devost: It is, right? We need to figure out what levels of agency we want to retain as it relates to warfighting. We’ve said, well, we want to maintain the decision-making as it relates to other human beings. But what if, over and over again, AI makes better decisions, safer decisions, than human beings? Do we abdicate that responsibility? Do I lose the agency of being able to interpret what is misinformation with my own brain, or do I abdicate it to an AI system that does it for me? That is definitely going to be one of the fundamental questions we face over the next decade: where do we retain agency, and where do we decide that the machines can do it better?

“…would the world be a better place right now if Russia were run by some sort of autonomous AI?”

Gladstone: You seem to be suggesting that it may turn out that humans are far more dangerous.

Devost: In some domains, humans might be more dangerous.

Gladstone: I’m thinking of the Cuban Missile Crisis, and how the tapes suggest that John Kennedy was pretty much alone in wanting to make that deal to take American missiles out of Turkey so that Khrushchev would take them out of Cuba. I’m just wondering, if there had been an advanced chatbot adviser in the room, whether it would have stood with Kennedy or not.

Devost: Yes, it definitely makes you consider what the training data looks like for a decision like that. I don’t want you to think that I’m a fan of abdicating control to the machines; I’m certainly not. We have to figure out which decisions are fundamentally human and which are the ones that can be automated or augmented.

Gladstone: It depends on what you think of human nature, right? I mean, if there is a machine that is developed to help us fight the best war, is there a possibility that that machine may say: best not to go to war?

Devost: As long as we get it to understand our objectives and our constraints. You know, you could sit and ask, would the world be a better place right now if Russia were run by some sort of autonomous AI? Possibly. But if the AI has been programmed with the same biases, the same tendencies, and the same ambitions, it might be more efficient than Putin in perpetrating these atrocities.

Gladstone: Matt, thank you very much.

Devost: Yes, of course. It was my pleasure. Enjoyed the conversation.

Gladstone: Matt Devost is the CEO and co-founder of the global strategic advisory firm OODA, LLC. And that’s what we got on AI this week!

Further reading on OODA Loop:

https://oodaloop.com/archive/2023/02/21/ooda-almanac-2023-useful-observations-for-contemplating-the-future/

https://oodaloop.com/ooda-original/2019/01/03/ai-will-test-american-values-in-the-battlefield/

https://oodaloop.com/archive/2022/12/05/we-are-witnessing-another-inflection-point-in-how-computers-support-humanity/

https://oodaloop.com/archive/2022/12/12/the-great-gpt-leap-is-disruption-in-plain-sight/

https://oodaloop.com/ooda-original/disruptive-technology/2022/12/22/ooda-loop-2022-the-past-present-and-future-of-chatgpt-gpt-3-openai-nlms-and-nlp/

Tagged: AI, Matt Devost

About the Author

Daniel Pereira

Daniel Pereira is research director at OODA. He is a foresight strategist, creative technologist, and an information communication technology (ICT) and digital media researcher with 20+ years of experience directing public/private partnerships and strategic innovation initiatives.