
The Problem With Solutions To Cyber Threat Detection

In the previous post in this series, we discussed the importance of problem comprehension and its role in problem-solving. In short, part one highlighted that you can’t solve a problem you don’t understand. The paradox is that you won’t completely understand most problems. While problem-solving begins with a problem, problem framing does not happen once. It evolves continuously until you discover the right solution. The process does not run in reverse: you do not start with the right solution and then refine a problem until it fits.

This article shares more about why threat detection is hard, even for machine learning. Part one suggested that machine learning improves problem-solving in ways traditional technology cannot promise. However, we will see that while machine learning is good––at times great––for many problems, it is a perfect solution for almost nothing.

1: Machine learning is not perfect because it produces probabilistic outcomes. It can’t be perfect. Probabilistic outcomes are both a bug and a feature. It is a feature because probabilistic outcomes are more robust than the inflexible traditional security paradigm of rules and signatures. However, it is a bug because probabilistic outcomes produce false positives, false negatives, and inconsistent value propositions for users.
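The tradeoff described above can be made concrete with a small sketch. Assuming a hypothetical detector that emits a suspicion score between 0 and 1, an alerting threshold converts scores into decisions, and moving that threshold trades false positives against false negatives without ever eliminating both:

```python
# Illustrative sketch (not any vendor's actual detector): a probabilistic
# classifier emits a suspicion score in [0, 1]; a threshold turns the
# score into an alert. The events and scores below are invented.

THRESHOLD = 0.7  # hypothetical alerting threshold

# (score, is_actually_malicious) pairs -- made-up evaluation data
events = [
    (0.95, True),   # caught: true positive
    (0.80, False),  # benign but noisy: false positive
    (0.40, True),   # subtle attack below threshold: false negative
    (0.10, False),  # quiet benign event: true negative
]

def confusion_counts(events, threshold):
    """Count (TP, FP, FN, TN) for a given alerting threshold."""
    tp = fp = fn = tn = 0
    for score, malicious in events:
        alerted = score >= threshold
        if alerted and malicious:
            tp += 1
        elif alerted and not malicious:
            fp += 1
        elif not alerted and malicious:
            fn += 1
        else:
            tn += 1
    return tp, fp, fn, tn

print(confusion_counts(events, THRESHOLD))  # (1, 1, 1, 1)
print(confusion_counts(events, 0.9))        # (1, 0, 1, 2)
```

Raising the threshold to 0.9 removes the false positive but keeps the false negative: the probabilistic core guarantees the tradeoff, not its elimination.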

Many real-world applications still use expert systems and knowledge bases in the background of machine learning solutions for stability. Machine learning has not completely obviated rules, signatures, or threat intelligence in cybersecurity.

Using non-statistical solutions helps projects get off the ground and out the door faster by providing customers with stable features, value propositions, and performance. This strategy is also crucial because you never want to use something as complex as machine learning to solve a problem a rule could solve. One of the biggest traps “AI” projects fall into is demanding that the entire solution fit the machine learning paradigm. These projects insist on learning everything from the data and discount any a priori knowledge of the problem. They struggle because we know the most about the simplest things; applying the most complex technique to the most trivial aspects of a problem creates unstable solutions on purpose. Solving the easiest parts of a problem with the most complicated method is how to make the most expensive, least stable solution, not the best solution.
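One way to picture the rules-first strategy is a simple triage layer: deterministic rules handle the stable, known-bad cases, and only what falls through goes to a statistical model. Everything here is a hedged sketch with invented names; the model is a stub standing in for a trained classifier:

```python
# A sketch of "rules first, learning second": cheap, explainable rules
# handle what a priori knowledge already covers; the (stubbed) model
# handles the remainder. All domains and scores are hypothetical.

KNOWN_BAD_DOMAINS = {"evil.example", "c2.example"}  # hypothetical threat intel

def rule_verdict(event):
    """Deterministic layer: exact, stable, cheap."""
    if event.get("domain") in KNOWN_BAD_DOMAINS:
        return "block"
    return None  # the rules have no opinion

def model_score(event):
    """Stand-in for a trained model returning a suspicion score in [0, 1]."""
    return 0.2  # placeholder value -- a real model would be probabilistic

def triage(event, threshold=0.8):
    verdict = rule_verdict(event)
    if verdict is not None:
        return verdict  # a stable rule always wins over the model
    return "alert" if model_score(event) >= threshold else "allow"

print(triage({"domain": "evil.example"}))    # block
print(triage({"domain": "benign.example"}))  # allow
```

The design point is the ordering: the known-bad domain never reaches the probabilistic layer, so its verdict stays stable regardless of how the model behaves.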

2: Machine learning is good at classifying noisy inputs based on known situations. These known situations do not need to be described in painstaking detail as they would with traditional software development. Data can represent those “situations.” However, machine learning rarely excels at classifying data outside the known problem space its data represents. A model learns to fit known data as closely as possible, regardless of how it performs outside those situations. This means machine learning is good at interpolation but poor at extrapolation. Despite all the buzz about “AI,” these solutions do not represent natural intelligence, and they will not generalize to other problems the way humans do. Each is a narrow, statistical approximation of a problem that interpolates signals.
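A toy numeric example makes the interpolation/extrapolation gap visible. Fit a straight line to samples of y = x² taken from [0, 3], then query it inside and far outside that range; the model has no knowledge beyond its training data, so the error explodes outside it:

```python
# Minimal illustration: an ordinary least-squares line fit to samples of
# y = x**2 on [0, 3]. Inside the training range the linear fit is a
# tolerable approximation; outside it, the error blows up.

xs = [0.0, 1.0, 2.0, 3.0]
ys = [x * x for x in xs]  # the "true" relationship the model never sees fully

# least-squares slope and intercept (stdlib only)
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

def predict(x):
    return slope * x + intercept

inside_err = abs(predict(1.5) - 1.5 ** 2)     # 1.25: acceptable interpolation
outside_err = abs(predict(10.0) - 10.0 ** 2)  # 71.0: extrapolation fails badly
print(inside_err, outside_err)
```

The same failure mode applies to a threat detector: it fits the attack data it was trained on and offers no guarantee about attacks outside that distribution.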

This all means that you will need to frame different problems distinctly and find features for learning that do not overlap. The less distinct one problem is from another, the less effective your solutions will be in practice. For example, threat detection includes dozens of threat vectors. Therefore, you will be unable to solve all these problems with one solution. The best you can hope for is to find optimal partial solutions for each threat vector and combine them into one solution. Tesla uses dozens of machine learning algorithms and components to recognize narrow, well-defined objects: one partial solution for stop sign recognition, others for lane markers, and more for identifying other vehicles. At Cybraics, we have taken a similar approach. Cybraics’ solution uses a kind of meta-algorithm for distributed learning over the whole cybersecurity problem, orchestrating dozens of learning algorithms. Such a strategy is essential for iterative, adaptive computation, and it reflects the view that problem-solving is dispersed and continuous.
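The "many narrow detectors, one orchestrator" shape can be sketched in a few lines. The detectors and thresholds below are hypothetical illustrations, not Cybraics' actual components: each narrow detector scores only the threat vector it understands, and a meta-layer combines their verdicts:

```python
# Hypothetical narrow detectors, each framed around one threat vector,
# combined by a simple meta-layer. All fields and scores are invented.

def beaconing_detector(event):
    """Narrow detector: periodic outbound connections (C2 beaconing)."""
    return 0.9 if event.get("periodic_outbound") else 0.0

def exfil_detector(event):
    """Narrow detector: unusually large outbound transfers."""
    return 0.8 if event.get("bytes_out", 0) > 10_000_000 else 0.0

def brute_force_detector(event):
    """Narrow detector: repeated failed logins."""
    return 0.7 if event.get("failed_logins", 0) > 20 else 0.0

DETECTORS = [beaconing_detector, exfil_detector, brute_force_detector]

def orchestrate(event, threshold=0.6):
    """Meta-layer: alert when any narrow detector is confident enough."""
    scores = [d(event) for d in DETECTORS]
    return max(scores) >= threshold, scores

alert, scores = orchestrate({"failed_logins": 50})
print(alert, scores)  # True [0.0, 0.0, 0.7]
```

A real orchestrator would weigh, correlate, and learn from its detectors rather than take a maximum, but the structure is the point: each partial solution stays narrow and well-framed.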

3: Another challenge for machine learning is sensitivity to a changing problem. When the relationship between input and output data in the underlying problem changes, a trained model becomes obsolete over time. This calls for diligent monitoring, but also for keeping each solution narrow. A smaller, well-specified solution controls variability better than a single, underspecified one.
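The monitoring half of that advice can be sketched with a deliberately simple check: compare the statistics of live data against the data the model was trained on, and flag the model for retraining when they diverge. Real systems use richer tests (population stability index, Kolmogorov–Smirnov, and the like); a mean-based z-score keeps the idea visible. All values below are invented:

```python
# A hedged sketch of drift monitoring: flag a model when the live data's
# mean wanders too far from the training-time mean, measured in
# training-time standard deviations. Numbers are illustrative.

from statistics import mean, stdev

def drift_detected(training_values, live_values, z_threshold=3.0):
    """True when live data has drifted far from the training baseline."""
    mu, sigma = mean(training_values), stdev(training_values)
    z = abs(mean(live_values) - mu) / sigma
    return z > z_threshold

baseline = [10, 11, 9, 10, 12, 10, 11]  # feature values at training time
stable   = [10, 11, 10, 9, 11]          # same regime: no action needed
shifted  = [25, 27, 26, 28, 24]         # the underlying problem changed

print(drift_detected(baseline, stable))   # False
print(drift_detected(baseline, shifted))  # True
```

The narrower the solution, the more meaningful this check becomes, because a well-specified problem has a baseline worth comparing against.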

4: Machine learning is also sensitive to context, or the background distribution of the data. This is more challenging than the previous issue.

Consider the problem of detecting defective potatoes on a conveyor belt. This is a real problem. However, it is an easier problem than threat detection. The reason is that the context of the potato never changes. The problem always includes a potato on a belt. We can assume that any problem in which one object moves at a slow fixed rate in one direction on a conveyor belt with no changes in context is simple. Too bad threat detection does not exist on a conveyor belt.

Complex problems are complex due to changes in object occurrence, but they are further complicated by the context surrounding the objects. Consider self-driving cars, where both the objects and their context change. Bridges are often considered blind spots for autonomous vehicles because they lack many of the environmental cues found on roads, and without those cues, sensors struggle to keep the car on the road. While changes in object occurrence and environment don’t fool humans, they fool solutions that learn from data.

Consider cybersecurity, and specifically malware detection in executables. Although there are many types of malware and countless variants, malware in executables is easier to detect than insider threats. An insider threat is a malicious threat to an organization that comes from people within it (hence the name), such as employees, former employees, contractors, or business associates. The context of malware detection in executables is static, existing in a hash or the binary patterns of an executable. And because executables have not changed much in thirty years, we know where to look.
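At its simplest, that static context looks like the following sketch: an executable's bytes do not change once written, so a cryptographic hash of the file can be matched against a feed of known-bad hashes. The hashes below are computed from toy byte strings, not real malware:

```python
# Static-context detection at its most basic: hash the file and look it
# up. The "threat feed" here is a hypothetical one-entry set built from
# a harmless toy byte string.

import hashlib

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

KNOWN_BAD = {sha256_of(b"totally-a-virus")}  # stand-in for a threat feed

def is_known_malware(file_bytes: bytes) -> bool:
    return sha256_of(file_bytes) in KNOWN_BAD

print(is_known_malware(b"totally-a-virus"))  # True
print(is_known_malware(b"harmless tool"))    # False
```

This is exactly the kind of stable, rule-like check that variants defeat (one flipped byte changes the hash), which is why static signatures get supplemented, but its context never moves: the answer lives in the bytes.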

The context of an insider threat specifically, and of aberrant-behavior detection generally, is different because baseline behavior differs from user to user, from organization to organization, and over time. When everything is changing, the problem is more complex.
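A small sketch shows why shifting baselines make this harder: the same event can be benign for one user and aberrant for another, so "normal" must be learned per entity rather than globally. The users and transfer volumes below are invented:

```python
# Per-user baselining: score an event against that user's own history,
# not a global rule. All users and numbers are hypothetical.

from statistics import mean, stdev

# invented history of nightly data transfers, in MB, per user
history = {
    "analyst": [500, 520, 480, 510, 495],  # routinely moves large files
    "intern":  [2, 3, 1, 2, 2],            # barely transfers anything
}

def is_aberrant(user, value, z_threshold=3.0):
    """True when a transfer is far outside that user's own baseline."""
    vals = history[user]
    mu, sigma = mean(vals), stdev(vals)
    return abs(value - mu) / sigma > z_threshold

print(is_aberrant("analyst", 490))  # False: routine for the analyst
print(is_aberrant("intern", 490))   # True: wildly abnormal for the intern
```

The identical 490 MB transfer produces opposite verdicts, and since real baselines also drift over time, the detector must keep re-learning what normal means.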

This list is hardly comprehensive. The data available for threat detection is generally weak. Enterprise networks are generally sloppy. Distinguishing between aberrant and anomalous behavior is difficult. And talent shortages exist in both cyber and data science. Let me know why you think threat detection is so hard.


About the Author

Rich Heimann

Rich Heimann is Chief AI Officer at Cybraics Inc, a fully managed cybersecurity company. Founded in 2014, Cybraics operationalized many years of cybersecurity and machine learning research conducted at the Defense Advanced Research Projects Agency. Rich is also the author of Doing AI, a book exploring what AI is and is not, what others want AI to become, what you need solutions to be, and how to approach problem-solving.