
Flaws in the U.S. Vulnerabilities Equities Process

Last week, the security community was in a flurry over the disclosure of a severe vulnerability (known as CVE-2020-0601) in Microsoft’s Windows operating system. Notably, the National Security Agency (NSA) tipped off Microsoft, helping the tech giant patch the flaw instead of exploiting it for national security missions. NSA was praised for its cultural shift from offense to defense; however, in my opinion, not all that glitters is gold.

This event has brought much needed attention to the Vulnerabilities Equities Process (VEP)—the manner by which the U.S. government determines whether to withhold or disclose zero-day vulnerabilities. The inherent struggle between competing offensive and defensive interests makes the VEP incredibly difficult to implement.

Check out Jason Healey’s valuable analysis of the VEP’s origins and implications here.

Although I agree with his take on the VEP’s priorities, I am unconvinced that the process has been as cut and dried as described. With the release of the unclassified charter in 2017, we can more easily call attention to process inconsistencies and continue to raise critical questions. In this article, I focus my assessment on the misleading nature of the modern VEP and how it influences U.S. cybersecurity.

Does the VEP Actually Favor Defense?

Numerous senior officials have gone on record to reinforce the claim that the default position of the VEP is to disclose vulnerabilities. When the unclassified charter was finally published online, Rob Joyce, former White House Cybersecurity Coordinator, stated that the NSA disclosed upwards of 90 percent of all vulnerabilities that went through the VEP.

While 90 percent sounds like a significant statistic, what is included in NSA’s calculation of that number? Due to the classified nature of the individual vulnerability discussions, we may never know. But we should maintain healthy skepticism about whether the NSA is inflating its disclosure numbers. How might that happen, you ask?

Take, for example, the Shadow Brokers leaks, which dumped a treasure trove of alleged NSA hacking tools online. “By that point, the intelligence value” of the exploits was “degraded,” so it was decided that NSA would alert whatever vendors were affected, a former senior administration official said. Because some of the leaked tools included zero-day exploits, those vulnerabilities should have been disclosed through the VEP. This goes to show that disclosure may not always operate as we expect. When disclosures are made because an agency’s hand was forced, rather than as a result of good-natured deliberation, the VEP may be less altruistic than most tend to believe.

Additionally, the public discussion so far has mostly focused on NSA tools. There is almost no reporting on qualifying vulnerabilities from the Central Intelligence Agency, U.S. Cyber Command, Federal Bureau of Investigation, or others. Is the collective average of the VEP much lower than the reported 90 percent? What are the updated statistics and do those numbers still hold true in 2020?

Risk Management: Knowing to Ask the Right Questions

Zero-day vulnerabilities have been described as the most valuable of all bugs, yet the consequences of exploitation vary greatly among them. Because of these varied effects, the VEP charter references a risk management approach that considers factors such as reliance and severity in its evaluation criteria. Interestingly, a few cybersecurity experts have considered another factor by asking: are vulnerabilities sparse or dense?

Healey’s article breaks the concept down as follows: “If vulnerabilities are sparse, then every one you find and fix meaningfully lowers the number of avenues of attack that are extant. If they are dense, then finding and fixing one more is essentially irrelevant to security and a waste of the resources spent finding it. Six-take-away-one is a 15% improvement. Six-thousand-take-away-one has no detectable value.”
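To make that arithmetic concrete, here is a minimal sketch of the sparse-versus-dense logic (one taken away from six actually works out closer to 17 percent than the quote’s 15, but the point stands):

```python
# Illustrative arithmetic only: the marginal security gain from fixing one
# bug under the sparse vs. dense assumptions described above.

def marginal_improvement(total_vulns: int, fixed: int = 1) -> float:
    """Fraction of extant attack avenues removed by fixing `fixed` bugs."""
    return fixed / total_vulns

# Sparse case: six vulnerabilities exist, one gets fixed.
print(f"sparse: {marginal_improvement(6):.1%}")     # ~16.7% fewer avenues

# Dense case: six thousand exist, one gets fixed.
print(f"dense:  {marginal_improvement(6000):.3%}")  # ~0.017%, negligible
```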

But what if that one vulnerability made it possible to break into hundreds of thousands of devices? WannaCry has been one of the most infamous cyberattacks to date, infecting more than 230,000 computer systems in 150 countries and causing approximately $4 billion in financial losses. Its success was attributed to EternalBlue, an alleged NSA hacking tool caught up in the Shadow Brokers leak.

In the years since this attack, we have seen how much damage just one zero-day is capable of. It was uncovered that China stole a series of NSA’s “Eternal” exploits (which attack the SMB network protocol) by capturing the code “like a gunslinger who grabs an enemy’s rifle and starts blasting away,” at least 14 months before North Korean hackers spread WannaCry globally. The hacking tool was also repurposed for the NotPetya and BadRabbit campaigns shortly after.

Some have postulated that once a zero-day vulnerability is discovered, its value is severely diminished. I don’t believe that has been the case. The lifespan of major vulnerabilities is shockingly enduring. For example, Stuxnet and its variants are still active years after the release of a patch. Criminal actors and other nation-states continue to leverage such vulnerabilities, taking a much different approach from the U.S. preference to burn them, and perpetuating the risk even further.

This long-term risk may also be fueled by the frequency of reused software code. As an article by Threatpost put it: “The amount of insecure software tied to reused third-party libraries and lingering in applications long after patches have been deployed is staggering. It’s a habitual problem perpetuated by developers failing to vet third-party code for vulnerabilities, and some repositories taking a hands-off approach with the code they host.”

As developers continue to reuse code, accurately estimating risk becomes next to impossible for VEP members. For products that share an enormous amount of code, whether between versions or across services, thousands of researchers would be required to determine the full scope of vulnerable systems. All this to say, we may be underestimating the threat from zero-days because the same vulnerability could exist across multiple products and third parties.
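As a minimal sketch of why shared code compounds the problem, consider how a single vulnerable library version can surface in many products at once (all names, versions, and data below are hypothetical):

```python
# Hypothetical example: one vulnerable library version exposing several
# products that reuse it. Names, versions, and data are all invented.

VULNERABLE = {("libparse", "1.2.0"), ("libparse", "1.2.1")}  # known-bad builds

products = {
    "product-a": [("libparse", "1.2.0"), ("libnet", "3.1.0")],
    "product-b": [("libparse", "1.2.1")],
    "product-c": [("libparse", "2.0.0")],  # patched version
}

# One flaw in libparse propagates to every product still shipping a bad build.
exposed = sorted(name for name, deps in products.items()
                 if any(dep in VULNERABLE for dep in deps))
print(exposed)  # ['product-a', 'product-b']
```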

Furthermore, our risk analysis should also address zero-day collision rates, or the likelihood that two (or more) independent parties will discover the same vulnerability. If there were a high collision rate, disclosure by one party should theoretically “disarm” the vulnerability’s use by another. Since there are plenty of vulnerabilities to go around, some experts believe that adversaries are likely to hold a set of zero-days mostly different from one’s own, and that, as a result, little value comes from disclosure.
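A toy model (my own illustration, not drawn from the VEP literature) shows how even modest annual rediscovery rates compound over the years a vulnerability is retained; the 10 percent annual rate below is an assumption:

```python
# Toy model: if an independent party rediscovers a retained zero-day with
# probability p in any given year, the chance that at least one other party
# holds it after t years is 1 - (1 - p)**t. The 10% rate is an assumption.

def collision_probability(p_per_year: float, years: int) -> float:
    """Probability of at least one independent rediscovery within `years`."""
    return 1 - (1 - p_per_year) ** years

for years in (1, 2, 5, 10):
    print(f"{years:>2} years: {collision_probability(0.10, years):.0%}")
# 1 year: 10%, 2 years: 19%, 5 years: 41%, 10 years: 65%
```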

In my opinion, focusing on the numbers while overlooking the criticality of the vulnerabilities is shortsighted. We have seen how one vulnerability can produce a significant global impact (WannaCry) and that the cascading effects of zero-day exploits continue long after disclosure (Stuxnet). Besides, a RAND study found that rediscovery rates can reach as high as 40 percent given enough time.

The VEP should evaluate the risk criteria as a whole and include questions like: Does this vulnerability give a user initial access to a system or administrator privileges? Is the affected product found globally, or is it restricted to a regional audience? Can the exploit be executed remotely, or is physical access required? What is the likelihood that this vulnerability is present in Section 9 companies (the most critical of U.S. critical infrastructure)?
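To make those questions concrete, here is a minimal sketch of how such criteria might be structured as a rubric; the fields and weights are hypothetical assumptions of mine, not the charter’s actual evaluation criteria:

```python
# Hypothetical scoring rubric for the questions above. The fields and weights
# are illustrative assumptions, not the VEP charter's actual criteria.
from dataclasses import dataclass

@dataclass
class VulnAssessment:
    grants_admin: bool          # administrator privileges vs. mere initial access
    global_footprint: bool      # product deployed globally vs. regionally
    remotely_exploitable: bool  # exploitable without physical access
    in_section9: bool           # present in Section 9 critical infrastructure

    def risk_score(self) -> int:
        """Crude additive score; a higher score argues for disclosure."""
        return (2 * self.grants_admin
                + 1 * self.global_footprint
                + 2 * self.remotely_exploitable
                + 3 * self.in_section9)

# Worst case: a remote, admin-level flaw in globally deployed Section 9 software.
print(VulnAssessment(True, True, True, True).risk_score())  # 8
```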

“Trust the Process”

In the rollout of CVE-2020-0601 and its patch, reports have made clear that the NSA did not see any exploitation of the vulnerability in the wild. But NSA’s ability and authority to detect breaches in civilian agencies and domestic U.S. critical infrastructure should be practically nonexistent.

In fact, there are no realistic means for any executive branch department to detect such a thing without either incredible overreach or previously established agreements with private sector companies. The agency most likely to get wind of a breach is the Department of Homeland Security, which relies heavily on voluntary reporting. In other words, VEP members have no sure way to know whether an exploit has already been compromised before choosing how to respond.

Once you peek behind the VEP curtain, other process questions start to emerge as well. The vague threshold for entry requires submissions “as soon as is practicable,” but according to experts, it takes about two years to fully utilize and integrate a discovered vulnerability. When does the interagency conversation begin? When NSA has an inkling of a successful zero-day, or after hundreds of thousands of dollars have been spent testing it? The longer that window, the harder an agency will fight to restrict the vulnerability. And what happens after a disclosure if a vendor doesn’t meet “USG requirements”? The charter goes no further in explaining what those requirements may be.

To get really into the weeds, how does something as simple as voting in the VEP work? Does the Department of Defense (DoD) get one vote for the department or four (counting NSA, U.S. Cyber Command, and the DoD Cyber Crime Center)? These details may be mundane, but I have seen government programs lose all credibility as a result of faulty processes.

Trust is built by saying what you mean and doing exactly that. The public charter is a step in the right direction, but there are still unanswered questions about its implementation. Given that the charter is an administrative policy carrying no legal obligation, are agencies being transparent and clear in their intentions?

Future Considerations

Through its publication of the charter, the White House intended to build trust and influence best practices for the international community. To maintain that good faith, there must be assurance that the program works as intended.

Strategic threats, like the “going dark” problem and the trend toward great power competition, push the needle toward offensive interests. Private sector systems become collateral damage as vulnerabilities remain restricted for longer periods of time. The public needs greater certainty that the right people, with the necessary authority and accountability, are weighing these equities very carefully.

In an article by Computer Weekly, Amit Yoran, chairman and CEO of Tenable and a founding director of the United States Computer Emergency Readiness Team (US-CERT), said it was rare, if not unprecedented, for a government agency to disclose its discovery of a critical vulnerability to a supplier. I applaud the NSA’s decision to come forward with this vulnerability, but we must still ask what triggered the disclosure. Does the agency have another method for targeting crypt32.dll, the Windows component underlying CVE-2020-0601? Is it smoke and mirrors after all?

Healey said the VEP “is a very reasonable process, rooted in sensible criteria of when to disclose a vulnerability or retain it.” I agree, because I helped write it. But it’s time for more answers from those who are in charge.

