Malware Analysis: The Danger of Connecting the Dots

The findings and conclusions of malware “analysis” are not in fact analysis; they are, rather, a collection of data points linked together by assumptions whose validity and credibility have not been evaluated. This lack of analytic methodology could prove exceedingly problematic for those charged with making decisions about cyber security. If you cannot trust your analysis, how are you supposed to make sound cyber security decisions?

Question: If I give you a malware binary to reverse engineer, what do you see? Think about your answer for a minute and then read on. We’ll revisit this shortly.

It is accepted as conventional wisdom that Stuxnet is related to Duqu, which is in turn related to Flame. All three have been described as “sophisticated” and “advanced,” so much so that they must be the work of a nation-state (such work presumably requiring large amounts of time, many skilled people, and code written for purposes beyond simply siphoning off other people’s cash). The claim that the US government is behind Stuxnet has consequently led people to assume that all related code is US sponsored, funded, or otherwise backed.

Except for the claim of authorship, all of the aforementioned data points come from people who reverse engineer malware binaries. These are technically smart people who practice an arcane and difficult art. However, what credibility does this technical skill give them beyond their domain? In our quest for answers, do we give too much weight to the conclusions of those with discrete technical expertise and fail to approach the problem with sufficient depth and objectivity?

Let’s take each of these claims in turn.

Are there similarities, if not outright sharing, between the code in Stuxnet, Duqu, and Flame? Yes. Does that mean the same people wrote them all? Do you believe there is a global marketplace where malware is created and sold? Do you believe the people who operate in that marketplace collaborate? Do you believe that the principle of “code reuse” is alive and well? If you answered “yes” to any of these questions, then a single source of “advanced” malware cannot be your only valid conclusion.
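
To make the “similarity” claim concrete: most such findings rest on measuring how much raw content two samples share. Below is a minimal sketch in Python of one such measurement, Jaccard overlap of byte n-grams; the filenames are hypothetical and real tooling is far more elaborate, but the logic is the same. Note what the number actually tells you: bytes are shared. It does not tell you who wrote them.

```python
# Minimal sketch of a "code similarity" measurement: Jaccard overlap
# of byte n-grams between two samples. Filenames are hypothetical.
def ngrams(data: bytes, n: int = 4) -> set[bytes]:
    """All contiguous n-byte slices of the input."""
    return {data[i:i + n] for i in range(len(data) - n + 1)}

def similarity(a: bytes, b: bytes) -> float:
    """Jaccard index: shared n-grams over total distinct n-grams."""
    na, nb = ngrams(a), ngrams(b)
    return len(na & nb) / len(na | nb) if na | nb else 0.0

with open("sample_a.bin", "rb") as f:   # hypothetical sample
    sample_a = f.read()
with open("sample_b.bin", "rb") as f:   # hypothetical sample
    sample_b = f.read()

# A statically linked library, a copied snippet, or a shared toolkit
# drives this score up just as surely as a common author does.
print(f"similarity: {similarity(sample_a, sample_b):.2f}")
```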

Is the code in Stuxnet, etc., “sophisticated”? Define “sophisticated” in the context of malware. Forget about malware and try to define “sophisticated” in the context of software, period. Is Excel more sophisticated than Photoshop? When words have no hard and widely accepted definitions, they can mean whatever you want them to mean, which means they have no meaning at all.

Can only a nation-state produce such code? How many government-funded software projects are you aware of that work as advertised? You can probably count them on one hand and have fingers left over. But now, somehow, when it comes to malware, suddenly we are to believe that the government has gotten its shit together?

“But Mike, these are, like, weapons. Super secret stuff. The government is really good at that.”

Really? Have you ever heard of the Osprey? Or the F-35? Or the Crusader? Or the JTRS? Or Land Warrior? Groundbreaker? Trailblazer? Virtual Case File?

I’m not trying to trivialize the issues associated with large and complex technology projects; my point is that a government program to build malware would be subject to the same issues and consequently no better, and quite possibly worse, than any non-governmental effort to do the same thing. Cyber crime statistics, inflated though they may be, tell us that governments are not the only entities that can and do fund malware development.

“But Mike, the government contracts out most of its technology work. Why couldn’t they contract out the building of digital weapons?” They very well could, but then what does that tell us? It tells us that if you want to build the best malware, you have to go to the open market (read: people who may not care who they’re working for as long as the money is good).

Some have gone so far as to claim that the US government “admitted” it was behind Stuxnet. It did no such thing. A reporter and author says that a government official told him that the US was behind Stuxnet. Neither the President of the United States, nor the Secretary of Defense, nor the Director of the CIA or the NSA got up in front of a camera and said, “That’s us!” That would be an admission. Let me reiterate: a guy who has a political agenda told a guy who wants to sell books that the US was behind Stuxnet.

It is as easy to believe the US is behind Stuxnet as it is to believe Israel is. You know who else does not want countries that do not have nuclear weapons to get them? Almost every country in the world, including those that currently have nuclear weapons. You know who else might not want Iran, a majority-Shia country, to have an atomic bomb? Roughly 30 Sunni countries, for starters, most of which could afford to go onto the previously mentioned open market and pay for malware development. What? You had not thought about the non-proliferation treaty or that Sunni-Shia thing? Yeah, neither has anyone working for Kaspersky, Symantec, F-Secure, etc., etc.

Back to the question I asked earlier: What do you see when you reverse engineer a binary?

Answer: Exactly what the author wants you to see.

I want you to see words in a language that would throw suspicion on someone else. I want you to see that my code was compiled on a system set to a particular foreign language (even though I read and write in a totally different one). I want you to see certain comments or coding styles that are the same as or similar to someone else’s (because I reuse other people’s code). I want you to see data about compilation date/time, PDB file path, and the like, which could lead you to draw erroneous conclusions but has no bearing on malware behavior or capability.
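
If that sounds abstract, consider the compile timestamp analysts routinely cite: it is a 32-bit field in the PE header that the author can overwrite before shipping. Here is a minimal sketch, assuming the third-party pefile library and hypothetical filenames:

```python
# Sketch: the PE "compile time" is attacker-controlled data.
# Assumes the third-party "pefile" library; filenames are hypothetical.
from datetime import datetime, timezone
import pefile

pe = pefile.PE("implant.exe")  # hypothetical sample

# The timestamp analysts cite is just a field in the COFF file header.
stamp = pe.FILE_HEADER.TimeDateStamp
print("claimed build time:", datetime.fromtimestamp(stamp, tz=timezone.utc))

# The author can set it to whatever supports the story they want told,
# e.g. a 9-to-5 workday in someone else's time zone.
pe.FILE_HEADER.TimeDateStamp = int(
    datetime(2010, 3, 15, 9, 30, tzinfo=timezone.utc).timestamp()
)
pe.write("implant_backdated.exe")
```

The same goes for embedded strings, language resources, and PDB paths: they are bytes the author chose to leave in, not forensic ground truth.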

Contrary to post-9/11 conventional wisdom, good analysis is not dot-connecting. That is part of the process but it is not the whole or only process. Good analysis has one or more proper methodologies behind it, as well as a fair dose of experience or exposure to other disciplines that come into play. Most important of all is that there are often multiple, verifiable, and meaningful data points to help back up assertions. Let me give you an example.

I used to work with a guy we’ll call “Luke.” Luke was a firm believer in the value of a given type of data. He thought it was infallible. So strong were Luke’s convictions about the findings he produced using only this particular type of data that he would draw conclusions about the world that flew in the face of what the rest of us like to call “reality.” If Luke’s assertions had been true, World War III would have been triggered, but as many, many other sources of data were able to point out, Luke was wrong. There was a reason why Luke was the oldest junior analyst in the whole department.

There are a number of problems, fallacies, and mental traps from which people tend to suffer when they attempt to draw conclusions from data. This is not an exhaustive list, but it helps to illustrate my point:

Focus Isn’t All That: There is a misconception that narrow and intense focus leads to better conclusions. The opposite tends to be true. That is, the more you focus on a specific problem, the less likely you are to think clearly and objectively. Because you just “know” certain things are true, you feel comfortable taking shortcuts to reach your conclusion, which in turn simply drives you further away from the truth.

I’ve Seen This Before: We give too much credence to patterns. When you see the same or very similar events taking place or tactics being used, your natural reaction is to assume that what is happening now is what happened in the past. You discount other options because it’s “history repeating itself.”

The Shoehorn Effect: We don’t like questions that don’t have answers. Everything has to have an explanation, regardless of whether or not the explanation is actually true. When you cannot come up with an explanation that makes sense to you, you will fit the answer to match the question.

Predisposition: We allow our biases to drive us to seek out data that supports our conclusions and to discount data that refutes them.

Emotion: You cannot discount the emotional element involved in drawing conclusions, especially if your reputation is riding on the result. Emotions about a given decision can run so high that they overcome your ability to think clearly. Rationalism goes out the window when your gut (or your greed) overrides your brain.

How can we overcome the aforementioned flaws? There is a range of methodologies that analysts use to improve objectivity and critical thinking. These are by no means exhaustive, but they give you an idea of the kind of effort that goes into serious analytic work.

Weighted Ranking: It may not seem obvious to you, but when presented with two or more choices, you choose X over Y based on the merits of X, Y (and/or Z). Ranking is instinctual and therefore often unconscious. The problem with most informal efforts at ranking is that they are one-dimensional.

“Why do you like the TV show Homicide and not Dragnet?”

“Well, I like cop shows but I don’t like black-and-white shows.”

“OK, you realize those are two different things you’re comparing?”

A proper ranking means you’re comparing one thing against another using the same criteria. Using our example, you could compare TV shows based on genre, sub-genre, country of origin, actors, etc., rank them according to preference in each category, and only then tally the results. Do this with TV shows or any problem and you’ll see that your initial, instinctive results will be quite different from those of your weighted rankings.
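
For the curious, here is a minimal sketch of that tallying in Python; the criteria, weights, and scores are all invented for illustration:

```python
# Weighted ranking sketch: score each option per criterion, then tally.
# Criteria, weights, and scores are invented for illustration.
criteria = {"genre": 0.4, "era": 0.3, "cast": 0.3}  # weights sum to 1.0

scores = {  # 1-5 preference for each option, per criterion
    "Homicide": {"genre": 5, "era": 4, "cast": 4},
    "Dragnet":  {"genre": 5, "era": 1, "cast": 3},
}

def weighted_total(option: dict[str, int]) -> float:
    """Sum of each criterion score multiplied by that criterion's weight."""
    return sum(weight * option[name] for name, weight in criteria.items())

for show in sorted(scores, key=lambda s: weighted_total(scores[s]), reverse=True):
    print(f"{show}: {weighted_total(scores[show]):.2f}")
```

The arithmetic is trivial; the discipline is in forcing every option to be judged against every criterion, instead of letting “cop show” compete with “black-and-white.”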

Hypothesis Testing: You assert the truth of your hypothesis through supporting evidence, but you are always working with incomplete or questionable data, so you can never prove a hypothesis true. It is accepted as true until evidence surfaces that suggests it to be false (see the bias note above). Information becomes evidence when it is linked to a hypothesis, and evidence is valid once we’ve subjected it to questioning: Where did the information come from? How plausible is it? How reliable is it? Answer these questions before moving forward from any claim.
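
One way to see why evidence supports a hypothesis but never proves it is to treat belief probabilistically. A simple Bayesian update, with every number invented for illustration:

```python
# Sketch: evidence shifts belief in a hypothesis; it never proves it.
# All probabilities below are invented for illustration.
prior = 0.5                # initial belief that hypothesis H is true
p_evidence_if_h = 0.7      # chance of seeing this evidence if H is true
p_evidence_if_not_h = 0.4  # far from zero: code reuse, false flags, etc.

# Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)
posterior = (p_evidence_if_h * prior) / (
    p_evidence_if_h * prior + p_evidence_if_not_h * (1 - prior)
)
print(f"belief after evidence: {posterior:.2f}")  # ~0.64, not certainty
```

Interrogating the source, plausibility, and reliability of a piece of information is, in effect, arguing over those likelihoods before you lean on the conclusion.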

Devil’s Advocacy: Taking a position contrary to the accepted answer helps overcome biases and one-dimensional thinking. Devil’s advocacy seeks out new evidence to refute “what everybody knows,” including evidence that was disregarded by those who take the prevailing point of view.

This leads me to another point, one I alluded to earlier and one that is not addressed in media coverage of malware analysis. What qualifications does your average reverse engineer have when it comes to drawing conclusions about geo-political-security issues? You don’t call a plumber to fix your fuse box. You don’t ask a diplomat about the latest developments in no-till farming. Why in the world would you take at face value what a reverse engineer says about anything except very specific, technical findings? I’m not saying people are not entitled to their opinions, but credibility counts if those opinions are going to have value.

So where are we and where does this leave us?

-There are no set or even widely accepted definitions related to malware (e.g. what is “sophisticated” or “advanced”).

-There is no widely understood or accepted baseline of the technical, intellectual, or actual capital required to build malware.

-Data you get out of code, through reverse engineering or from source, is not guaranteed to be accurate when it comes to issues of authorship or origin.

-Malware analysts do not apply any analytic methodology in an attempt to confirm or refute their single-source findings.

-Efforts to link data found in code to larger issues of geo-political importance are, at best, superficial.

Why is all of this important?

Computer security issues are becoming an increasingly important factor in our lives. Not in the sense that security is becoming universally appreciated, but in that computing controls and modifies nearly every aspect of governmental and personal life. Look at where we have been and where we are headed. Just under 20 years ago, few people in the US, much less the world, were online. Now, more people in the world access the internet through their phones than through a traditional computer. Cars use computers to drive themselves, and biological implants are controlled via Bluetooth. Neither of these developments has meaningful security features built in, but no one would ever be interested in hacking insulin pumps or pacemakers, right?

Taking computer security threats seriously starts by putting serious thought and effort behind our research and conclusions. The government does not provide information like this to the public, so we rely on vendors and security companies (whose primary interest is profit) to do it for us. When that “analysis,” which is far from rigorous, is delivered to decision-makers who are accustomed to dealing with conclusions developed through a much more robust methodology, their decisions can have far-reaching negative consequences.

Sometimes a quick-and-dirty analysis is proper. A quick but non-exhaustive malware analysis can help in certain circumstances. If you’re planning on making serious decisions about the threat you face from cyberspace, however, you should really take the time and effort to ensure that your analysis has looked beyond what IDA shows and considered more diverse and far-reaching factors.