As we begin to ramp up for OODAcon 2023 (October 25th in Reston, VA), we return to the Keynote Conversation between Bob Gourley and Vint Cerf at OODAcon 2022. Find the full transcript below, as well as a link to the audio file.
There are numerous takeaways from the conversation. We encourage you to spend time with the entire transcript. Two questions from Bob and Vint’s responses, however, are worth highlighting at the top of this post – as they significantly influenced our 2023 research agenda and our collective, internal OODA Loop here at OODA.
“No. No. This is not an algorithm…Wetware is the only way that you can make this work.”
The first was a conversation about whether the wicked problem of dis- and misinformation is solvable through purely technological approaches. Like many, in 2022 we felt as if we were “boiling the ocean” in our research, tracking the various initiatives attempting to frame and offer solutions for the problem. There was clearly no silver bullet, which prompted this question from Bob and this response from Vint:
Gourley: “…of all the risks we face on the internet, one of them that seems to be one of the hardest to mitigate is the threat of misinformation and disinformation today. And as a technologist, sometimes I find myself thinking, “Can’t we solve this algorithmically?”…
Cerf: …No. No. This is not an algorithm…Wetware is the only way that you can make this work. And it is not just critical thinking. It is the willingness to expend the time and energy to think critically – to insist on doing that. And I can tell you that there are some families who don’t like this idea because, you know, the kids come home, and they question their parents’ views and beliefs. Some parents don’t like that very much – and yet that is the price you pay for critical thinking.
Cerf’s response is paraphrased from the more extended discussion on the topic that you will find below. But this clear, definitive answer from Cerf about the non-technological nature of the solution was what we needed to hear to pivot our research away from tracking the problem broadly and casting a wide net for technological solutions, and back, with confidence, to our thesis that the problem and the solution really live at the level of what we call “Cognitive Infrastructure” – framed in Bob’s 2019 post “America’s Most Critical Infrastructure is also Our Most Neglected Infrastructure.” We will be revisiting our cognitive infrastructure insights in Q3 2023 in the run-up to OODAcon 2023.
“…this has to be “with eyes wide open” – which means that we cannot pretend that this is not a potentially hazardous environment.”
The second was in response to Bob’s question about the origins of the internet and where we are today – which essentially reinforced the points Matt Devost made in his welcome address at OODAcon 2022:
“So, the world that we are in right now – in the cyberspace world – is one in which great harm and great value can be derived. And our job, I think, collectively is to figure out how to harvest the utility of this online environment while defending against the potential harms and risks – which means this has to be “with eyes wide open” – which means that we cannot pretend that this is not a potentially hazardous environment. So the job that many of you and I have is to improve our ability to resist harmful behavior and to hold people and organizations and even countries accountable for their behavior in this online environment. And to give agency to people, to institutions, and even countries to defend themselves against harm. So that’s a rather long…answer, but that is where we are today.”
We had our eyes wide open over the course of the entire day at OODAcon 2022, and we will keep our eyes wide open at OODAcon 2023. Please join us.
In the meantime, enjoy this first installment of the fascinating conversation below – which also includes a discussion of future trends in the growth of the internet, subsea cables, low-orbit satellites, 6G, and the true nature of autonomous devices.
“…some of the people in this room might debate that. In fact, if there is no debate about that, this conference is not a success. We really need to talk about that.”
Bob Gourley: Good morning Vint.
Vint Cerf: Good Morning. Okay. Well, let us know if your brain cells fry, by the way. I don’t know about anybody else, but after listening to that talk, I needed a change of underwear.
Gourley: Well, ladies and gents, I know everybody in this room knows Vint Cerf. So I want to do a compressed bio. Vint Cerf is the guy that many of us regard as the founder, the father of the internet…
Cerf:…well, then that would be wrong because Bob Kahn and I were two hands on one pencil – and we had a lot of help.
Gourley: Okay. One of the fathers then. Would that be accurate?
Cerf: Thank you. That is more accurate.
Gourley: Yeah. And people also recognize you as an executive at Google since 2005, where you continue to help shape policy and standards globally and continue to track and improve the status of the internet globally.
Cerf: I don’t know about the improvement part, but I keep sticking my nose into it anyway.
Gourley: And I think people may also remember – if they don’t, I’ll tell you – he is a Turing Award winner, an award widely regarded as the Nobel Prize of computer science, and a recipient of the Presidential Medal of Freedom and many other recognitions. But the way I describe you, Vint, is that to me, as a technologist, you are the world’s most interesting man. <Laughs>
Cerf: And what kind of beer do I drink, right? Yes.
Gourley: And sometimes I just say he’s the architect of the Matrix we live in – and it has been a good Matrix so far.
Cerf: Oh, well, some of the people in this room might debate that. In fact, if there is no debate about that, this conference is not a success. We really need to talk about that.
“We have a problem.” And I look at him and I say: ”What do you mean, we?”
Gourley: All right, then let’s get into it because I did want to ask you some questions that are forward-looking. But first I wanted to ask your assessment of where we’ve been. Can you describe the days from the early generation of the internet to today? How did we get here?
Cerf: <Laugh> This is sort of like, “Please describe the universe in 25 words or less. Give three examples.” Right? Well, first of all, when this started, it was an engineering project. As you all know, the Defense Department was trying to figure out if packet switching would actually work as a way of supporting computer communication. And of course, it was heresy at the time. You know, of course, the way you communicated was with circuit switching. I mean, it had been doing that for how many “guzumpt” years – since 1876. And it works. We have a global demonstration of that: the telephone system. So why do you need this packet-switching stuff? Well, part of the answer, of course, is that you didn’t want to dial up a computer, wait until the thing picked up the phone, and then talk to it and hang up again, and then dial the next one.
Cerf: We wanted electronic postcards that would go as fast as computers would. So we did the ARPANET project – and it worked. And again, this is engineering. So then we, in the course of that project, discovered some applications: remote access to timeshare machines, file transfers, and then email – networked email – which turned out to be a really important discovery because we recognized very early on the social aspects of email distribution lists. The first two distribution lists that I joined were sci-fi lovers and yum yum, which was a restaurant review distribution list. So then, you know, you could see the beginnings of much of what we see today in that origin.
Then Bob Kahn shows up in my office at Stanford and says: “We have a problem.” And I look at him and I say: “What do you mean, we?” And he says, “Well, you know, the Defense Department now believes” – this is 1973 – “that computers could be useful in command and control.” But the implication of that is that some of them would be in mobile vehicles, some would be on ships at sea, and some would be in aircraft. And all we had succeeded in doing with the ARPANET was to connect computers with dedicated telephone circuits. And you know, then [this hardware] was in air-conditioned rooms, and it didn’t get up and move around. The problem with dedicated telephone circuits is that the ships get all tangled up in the wires, the tanks run over the wires and break them, and the airplanes never make it off the tarmac. So he had already started working on a mobile packet radio system and a packet satellite system. And his problem was: how do we hook them all together and make them look uniform? And that is where the internet protocols came from.
“It is simply an environment in which all kinds of things can happen – good and bad – just as the universe we live in has the capacity to do both.”
Over a period of about six months, we got that sorted out. So all of this is just straight, honest-to-God engineering; none of the people involved wanted to wreck it. They wanted to try to make it work. And so these were the halcyon days of computer networking. But the system has evolved. The World Wide Web shows up in 1991 – much more visibly in 1993 with Mosaic, and then in 1994 with Netscape Communications. The public gets access, which started around 1989 but becomes very visible after Netscape Communications goes public in 1995 and the dot-com boom is on. And as the system has penetrated more and more deeply into our society, we now see that this is not either a good or an evil environment. It is simply an environment in which all kinds of things can happen – good and bad – just as the universe we live in has the capacity to do both.
So, the world that we are in right now – in the cyberspace world – is one in which great harm and great value can be derived. And our job, I think, collectively is to figure out how to harvest the utility of this online environment while defending against the potential harms and risks – which means this has to be “Eyes Wide Open” – which means that we cannot pretend that this is not a potentially hazardous environment. So the job that many of you and I have is to improve our ability to resist harmful behavior and to hold people and organizations and even countries accountable for their behavior in this online environment. And to give agency to people, to institutions, and even countries to defend themselves against harm. So that’s a rather long, more than 25-word answer, but that’s where we are today.
“Good science is discovering that your theory is wrong and amending the theory to match the data that you get.”
Gourley: Thank you. And of all the risks we face on the internet, one of them that seems to be one of the hardest to mitigate is the threat of misinformation and disinformation today. And as a technologist, sometimes I find myself thinking, “Can’t we solve this algorithmically?”…
Cerf: …No. No. This is not an algorithm. Let me give you a well-worked example, all right? Let’s pretend for a minute that you are a scientist, and I am coming to you with a problem. And so I describe the problem to you, and based on your experience and your knowledge, you say, “You should do X,” for some value of X. So I thank you for that, and I go away. Ten years later, I come back, and I say, “Listen, I still have a problem.” And you say, “Well, I’ve been studying this problem for the last decade, and now, based on this new knowledge that I have, you should do Y – which is different from X.” Now, I could have one of two reactions to that. Number one would be: “Thank you. I really appreciate having the advantage of another 10 years of research.” And I’ll go off and do Y.
But I could have a different reaction: “So you lied to me 10 years ago about X. So I don’t believe Y either.” Now, here is the problem with that. There are too many people who think that science is absolute. And it is not. It is the best approximation we have to understand how the universe works now and, in the future, we may have a better understanding of that – and we may actually discover our theories were wrong. That is what good science is. Good science is discovering that your theory is wrong and amending the theory to match the data that you get.
I love this little illustration: Let’s imagine you are a scientist – and you’re a really good scientist – and you say: “I’m going to test my theory.” And so you devise an experiment and then you predict – you draw this graph and you’ve got all the places where the theory says the experiment should show results.
“…figuring out whether something is misinformation or disinformation is hard. And it is not algorithmically easy to do that.”
And so you start the experiment, and you get this point and this point, and they match perfectly, and by the end the curve fits perfectly there. And then you get this point over here. Now, there are only two kinds of scientists. One of them will look at that and say, “Eh, measurement error. Everything else fits, so my theory is perfect.” The other guy looks at that and says, “Huh, that’s funny.” He’s the guy who gets the Nobel Prize when he figures out what that point is doing there. So the thing that we need to remind people of is that you can’t necessarily tell when something is misinformation. It might be genuinely and legitimately asserted to be correct based on current knowledge. But in the scientific world we accept that our beliefs may be wrong – and we falsify them by doing experiments.
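For readers, Cerf’s “two kinds of scientists” point can be made concrete with a few lines of plain Python. The data points, the straight-line fit, and the numbers below are invented for illustration only; the sketch simply finds the observation that fits the theory worst and flags it as worth investigating rather than discarding.

```python
# Hypothetical illustration: fit a straight line to some observations and look
# at the single point that fits worst, instead of writing it off as
# measurement error. All data and numbers are invented for this toy example.
observations = [(0, 0.1), (1, 1.9), (2, 4.1), (3, 6.0), (4, 8.1), (5, 3.0)]

n = len(observations)
mean_x = sum(x for x, _ in observations) / n
mean_y = sum(y for _, y in observations) / n

# Ordinary least-squares slope and intercept for y ~ slope * x + intercept.
slope = sum((x - mean_x) * (y - mean_y) for x, y in observations) / sum(
    (x - mean_x) ** 2 for x, _ in observations
)
intercept = mean_y - slope * mean_x

# "Huh, that's funny": find the observation with the largest residual.
residuals = [(abs(y - (slope * x + intercept)), (x, y)) for x, y in observations]
worst_fit, point = max(residuals)
print(f"point {point} deviates by {worst_fit:.2f} -- worth investigating, not discarding")
```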
So the problem we run into is that even if all the evidence shows that something is correct, later we may discover it isn’t. And a classic way to introduce misinformation, and make it believable, is to mix it together with a whole bunch of other stuff that people know to be correct, or know to be true, or believe to be true.
And so you have a bunch of things that everybody believes – and then you sneak in this other little thing – and because it is sort of hiding in the midst of other truths, it just sort of looks like: well, I know all these other things are true, so this must be true too. And so it is a really subtle problem to try to solve. And the only way that you can really solve it, I believe, is to think very critically about what you see.
You have to ask questions about where this information comes from: Is there any corroborating evidence? Who is providing this information? And what purpose might they have in delivering it to me? And even when you do all those things, you may still run into the science problem that I was just describing – where it looks like it was correct, but later you find out it wasn’t. So figuring out whether something is misinformation or disinformation is hard. And it is not algorithmically easy to do that. And you can’t be correct all the time, but you could certainly think critically. Now that takes work. It really does take work. And there are people who don’t want to waste the time and energy to think critically. So they would rather turn to revealed wisdom – and rely on other people.
“….the erosion of trust really destroys our ability to distinguish misinformation and disinformation from that which we should accept.”
So now you have a problem in figuring out, well, if I’m going to go down that path – who should I rely on? Whose advice should I rely on? Who should I trust?
And one of the charts that was up in the morning’s presentation talked about trust and the potential erosion of trust in our society. The erosion of trust is probably one of the most serious socioeconomic problems we face today. And I’m sorry to keep on going here, but I can’t resist. Jim Clifton, the chairman of the board of Gallup, made a presentation on this subject a few months ago. And he showed a chart measuring the level of trust in various institutions, not only in this country but in other countries around the world. And, much to my dismay, our trust as a society in a lot of the institutions we had trusted has diminished over time.
And the most devastating indication is that even trust in the military – which has historically been very high in the United States, where the integrity of the military has been considered, if not absolute, certainly very high – has eroded over time. And, of course, at the bottom of the list are the politicians – they are the least trusted entity in the U.S., anyway, and generally in other places as well. Whether that’s justified or not, you know, we could debate. But the erosion of trust really destroys our ability to distinguish misinformation and disinformation from that which we should accept.
Gourley: Right. I saw that presentation and I noted a huge drop in the trust of the court systems…
Cerf: …yes, yes…
Gourley:…which is very telling, and these are huge concerns, and it makes me worried about our future. I keep wanting to think there must be some technological approaches or things we can do, but maybe it just has to be teaching critical thinking to every citizen and hoping for the best.
Cerf: Wetware is the only way that you can make this work. And it is not just critical thinking. It is the willingness to expend the time and energy to think critically – to insist on doing that. And I can tell you that there are some families who don’t like this idea because, you know, the kids come home, and they question their parents’ views and beliefs. Some parents don’t like that very much – and yet that is the price you pay for critical thinking.
“…the access to underlying internet infrastructure…to the point where you cannot avoid being able to access the internet even if you wanted to, because with sufficient coverage, every square inch of the planet, including the 70% of it that is water…”
Gourley: Right. Well, Vint, I also want to ask a bit about the future of the internet – and what you see coming next. Can you give us any insights there?
Cerf: Well, look, the trends are pretty clear.
The first one is increased connectivity – and more implementation of fundamental infrastructure that makes the internet work. So that is telecommunications capabilities.
https://oodaloop.com/archive/2023/05/24/undersea-telecommunications-cables-and-the-seabed-are-geopolitical-contested-arenas/
Second, much to my surprise – I don’t know how many of you track TeleGeography, but that is the firm that keeps track of where all the subsea cables are – it is jaw-dropping to see how much investment is being made in subsea cable. Places are being connected that I never thought would be connected – islands in the middle of the Pacific or the Atlantic – because the cost of building those cables is dropping. And the cables have become increasingly capable of carrying photons longer and longer distances without having to put in repeaters, which, of course, require power to be delivered to them under the water in order to repeat the signal.
So we are seeing a significant increase in subsea cables. We are building subsea cables now for our own networks at Google. And, in some cases, we have built the whole thing for our own use – as opposed to having to share it with somebody else in order to afford the cost. So that is one thing that’s happening. The other thing that is happening, of course, is that low earth orbit satellites are showing up – Starlink being the most visible right now. And it is clear that it works. So that increases the access to underlying internet infrastructure, almost to the point where you cannot avoid being able to access the internet even if you wanted to, because with sufficient coverage, every square inch of the planet – including the 70% of it that is water – is accessible to internet communications. So that’s the second thing that is happening.
“…the definition of edge keeps changing depending on where you stick the computing power.”
The third thing that’s happening is the mobile phone evolution from 4G to 5G – and whatever 6G turns out to be. And here I want to pause for a second and mention a couple of things about the 6G standards that are exciting. You probably – those of you who’ve been tracking this stuff – know about this notion of mobile edge computing, or you hear the term edge computing. It is very weird, in a way, because, you know, where the hell is the edge? I always thought, “Here’s this network and anything that’s on the outside of the network is at the edge.” But then cloud computing comes along and somehow people think of it as being in the center, but it isn’t. It is really at the edge of the network.
And then on the other end is the consuming side, the client side. And now we’re sticking computing in between the client and the cloud, and we’re calling that edge computing. So the definition of edge keeps changing depending on where you stick the computing power.
But what’s interesting about the 6G design is not just that it has edge computing capability – which potentially reduces latency for some kinds of applications – but that an application can not only take advantage of that edge computing capability to serve its own needs, it can also use that same capability to control how the communications are configured and what the parameters are.
“The thing I worry about is a headline that says ‘A Hundred Thousand Refrigerators Attack Bank of America.’”
So suddenly an application can say something about the kind of service it is going to get from the communication system – and that is different. So I’m very interested to see how that evolves.
The second thing about 6G is that it doesn’t necessarily have to be limited to radio. So if you think about this, there is nothing stopping you from imagining an architecture where you have the cloud with applications running, you have this 6G “edge” thing, and then what does the communication look like? It could be radio, but it could also be fiber, it could be coaxial cable, it could be anything that can carry bits. And so the idea of being able to use the application to manage the configuration and behavior of the communication system, radio or not, is the thing that really grabbed my attention.
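To give readers a feel for what “the application manages the configuration of the communication system” could mean in practice, here is a minimal sketch in plain Python. The `LinkProfile` and `EdgeSession` names, their parameters, and the `request_profile` call are hypothetical illustrations, not any actual 6G or mobile-edge-computing API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LinkProfile:
    """Communication parameters an application might ask the network for."""
    max_latency_ms: float      # end-to-end latency budget
    min_bandwidth_mbps: float  # sustained throughput needed
    reliability: float         # target packet-delivery ratio, 0.0 to 1.0

class EdgeSession:
    """Hypothetical stand-in for an edge-computing control interface.

    In the architecture described above, the same edge node that hosts
    application logic could also configure the underlying link -- radio,
    fiber, coax, or anything else that carries bits.
    """
    def __init__(self, app_name: str):
        self.app_name = app_name
        self.active_profile: Optional[LinkProfile] = None

    def request_profile(self, profile: LinkProfile) -> bool:
        # A real system would negotiate with the network here; this sketch
        # simply accepts whatever is requested and records it.
        self.active_profile = profile
        return True

# Example: a tele-operation application asks for a low-latency, reliable link.
session = EdgeSession("remote-inspection-robot")
granted = session.request_profile(
    LinkProfile(max_latency_ms=20.0, min_bandwidth_mbps=50.0, reliability=0.999)
)
print(f"{session.app_name}: profile granted = {granted}")
```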
So I’m seeing the possibility of a network that is much more adaptable to the applications that it is trying to support. So that’s sort of the – oh, and then of course there are programmable devices, which is another point that was already made. By the way, I can’t resist: autonomy, of course, is a big deal these days. What is that? It is a program that’s running open loop, without necessarily any control at all. And then the question is: what does autonomy imply? Okay, so I have a program – I don’t care whether it is machine learning, AI, or just, you know, a run-of-the-mill piece of Python code. The fact that it is just running and doing things that we have given it the ability to do, without further intervention, is what makes it autonomous. So when people run around being worried about AI, my reaction is: you should worry about plain old ordinary code too. Don’t get hung up on trying to make AI and ML safe. You should worry about the plain old code in the toaster [that was in an earlier slide]. That slide, by the way, made me think of something I had said a few years ago: “The thing I worry about is a headline that says ‘A Hundred Thousand Refrigerators Attack Bank of America.’”
Right. <Laugh> And, you know, I thought, haha, isn’t that funny? No, it is not funny. If you remember the webcams and the Dyn DDoS attack, you know that is a real problem.
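Cerf’s point that ordinary code can be autonomous is easy to make concrete for readers. The toy controller below is a hypothetical illustration, not any real appliance firmware: it is plain Python with no AI or ML in it, yet once started it senses and acts with no human in the loop, which is the property that makes it – and a hundred thousand networked refrigerators – worth worrying about.

```python
import random
import time

TARGET_TEMP_C = 180.0  # hypothetical browning temperature for this toy example

def read_temperature() -> float:
    """Stand-in for a sensor read; returns a simulated temperature."""
    return random.uniform(150.0, 210.0)

def set_heater(on: bool) -> None:
    """Stand-in for an actuator; a real toaster would switch the heating element."""
    print(f"heater {'ON' if on else 'OFF'}")

def run_unattended(cycles: int = 5) -> None:
    """Ordinary code that senses and acts with no human in the loop.

    Nothing here is AI or ML, yet once started it keeps making decisions
    on its own -- the sense in which Cerf calls plain code autonomous.
    """
    for _ in range(cycles):
        temperature = read_temperature()
        set_heater(temperature < TARGET_TEMP_C)
        time.sleep(0.1)

if __name__ == "__main__":
    run_unattended()
```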
“So it is very likely that we need to build more artifacts in the universe that the car – the autonomous vehicle – can sense.”
Cerf: So, just coming back to this question of autonomy, I want you to think for just a second about what you imagine when you hear the word cyberspace. It is an artifact. It is an artificial environment, and an autonomous thing operating in cyberspace operates in that space based on whatever its sense of the space is. So take an autonomous vehicle: although it looks like it is operating in the real world, it is not. It is operating in the world that it perceives. And, in a sense, that perception is an artificial space – based on how the sensors work and how well they work. And so when you start thinking about autonomy, and you think of it as being something operating in the world – our real world – the real world that we sense and experience may not be the same as the real world that the autonomous device senses.
And so it is operating in an environment that is different from ours. And the question is: how far apart are those two senses of the environment? And if they are far enough apart, there are some high-risk factors there. So I don’t know about you, but we have a company – one of the Alphabet companies is called Waymo – and we make self-driving cars.
But among the various things that I wonder about is: how do I know if the self-driving car knows that I’m there? I look for eye contact with a driver in order to figure out whether or not the driver sees me. And that is why they tell you that the way you cross the street in Rome is to get a newspaper and hold it up like this, so that you can’t see the drivers. Then, in theory, they know that you can’t see them, and so they feel some responsibility not to run into you. If you don’t have the newspaper, then it is your fault if the car runs into you, because you’re the one who is supposed to get out of the way. But I’m not sure how we signal each other if you’re an autonomous car and I’m, you know, in the crosswalk.
So it is very likely that we need to build more artifacts in the universe that the car – the autonomous vehicle – can sense. This might be signaling, for example: if you’re in a crosswalk, there could be signals radiating saying don’t move across this barrier because there is somebody there. You can think of all kinds of ways of hacking that and causing all kinds of traffic jams, I’m sure. But we may need to augment the signaling that autonomous vehicles are capable of sensing in order to make for a safer environment. So it is a very interesting kind of speculation to think about how we actually design and build systems like that.
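As a purely speculative sketch of the kind of signaling Cerf describes, the snippet below imagines a crosswalk beacon broadcasting a small occupancy message that a nearby autonomous vehicle could listen for. The message fields, port number, and broadcast mechanism are all invented for illustration, and, as Cerf himself notes, anything like this would need to be hardened against spoofing before it could be trusted.

```python
import json
import socket
import time

BROADCAST_ADDR = ("255.255.255.255", 47474)  # hypothetical local broadcast port

def crosswalk_message(crosswalk_id: str, pedestrians_present: bool) -> bytes:
    """Build a small, self-describing message an autonomous vehicle could parse."""
    payload = {
        "type": "crosswalk_occupancy",
        "crosswalk_id": crosswalk_id,
        "pedestrians_present": pedestrians_present,
        "timestamp": time.time(),
    }
    return json.dumps(payload).encode("utf-8")

def broadcast_once(message: bytes) -> None:
    """Send one UDP broadcast on the local network segment."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(message, BROADCAST_ADDR)

if __name__ == "__main__":
    # A beacon at a hypothetical crosswalk announces that someone is crossing.
    broadcast_once(crosswalk_message("main-and-5th", pedestrians_present=True))
```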
Be on the lookout for the final installment of this conversation later this week.
https://oodaloop.com/archive/2022/10/18/welcome-to-oodacon-2022-final-agenda-and-event-details/
https://oodaloop.com/archive/2023/06/05/the-oodacon-2022-welcome-address-by-ooda-ceo-matt-devost-surviving-exponential-disruption/