I’ve been a red teamer for twenty years now, perhaps even longer, but I didn’t know what to call it until 1995, when I started working with the Department of Defense. I’ve also been fortunate to participate in or lead hundreds of red teams across many divergent disciplines, ranging from strategic and tactical cyber, to physical threats like infectious disease outbreaks and nuclear power plant targeting, to more abstract items like Joint Operating Concepts.
Over those 20 years, I’ve had the opportunity to work with some of red teaming’s greatest minds like General Van Riper, Jim Miller, Mark Mateski, Neal Pollard, Brian Jenkins, Jeff Cooper, Steve Lukasik, Robert Garigue, Jason Healey, John Sullivan, Robert Bunker, and John Schmitt as well as incredible technologists like Bob Stratton, Chris Goggans, Tom Parker, Sean Malone, Bob Gourley, Jeff Moss, and others.
I often get asked what lessons I’ve learned over the past twenty years, so a few years ago I started putting together this list of ten lessons learned over twenty years of red teaming. Given that I’ve officially hit the twenty-year mark, I figured it was time to hit the publish button. While many of these read like concepts, vice lessons learned, I hope the reader finds them thought-provoking as they formulate and execute red teams of their own. As always, feedback and comments are welcome.
Interested in more insights? Every week I put out a newsletter with the top 7-10 stories that should be in your decision loop. You can subscribe at WWW.GLOBALFREQUENCY.COM.
The adage that the Jedi want to bring balance to the Force is a farce. Adversaries, competitors, and other actors/entities never seek balance. They seek asymmetry. Over time, I’ve come to recognize that order does not equate with balance, and the scales are never equally weighted, regardless of whether we are talking about international relations, economics, or societal frameworks like civil liberties versus security.
Asymmetry is a key objective of the red teamer. This point was driven home for me by Michael Moore’s presentation at the Boyd & Beyond conference, where he argued that the concept of Yin and Yang is a lie. We always seek advantage, or at least less disadvantage, and that needs to be the guiding ethos of your red team. Outsmart, outplay, and be driven to actually win something. You can drive asymmetry through overwhelming force, technological or tactical surprise, attacks of disproportionality, or long-term strategy.
Operating within a faster OODA Loop than your adversary is the core precept of the OODA Loop concept itself. The fighter pilot able to complete the OODA Loop faster will have the advantage, as Boyd demonstrated with his legendary 40-second wins. However, we need to look beyond just completing the OODA Loop quicker and acknowledge that in some situations the OODA Loop might be compressed to just Observe – Act. This is especially true when red teaming in the cyber domain. What can you do to force your adversary to compress or truncate their OODA Loop, and if they do, how can you take advantage of it? This goes beyond surprise or deception; it can also be achieved through exploitation of procedures or other constructs that narrow your adversary’s response options or their ability to respond in the first place. The red team’s agility and ability to operate within compressed OODA Loops can be a tactical enabler of its success.
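To make the tempo argument concrete, here is a minimal Python sketch of one way to model it. The cycle times, and the assumption that every red action forces blue back into re-orientation, are illustrative inventions rather than measurements:

```python
# A purely illustrative tempo model: blue needs a full uninterrupted
# OODA cycle to act, but every red action forces blue to re-orient
# (restart its cycle). All timings are assumptions, not measurements.

RED_CYCLE = 10    # assumed seconds per red Observe-Act loop
BLUE_CYCLE = 40   # assumed seconds per blue OODA Loop
HORIZON = 300     # simulated engagement length in seconds

red_actions = 0
blue_actions = 0
blue_progress = 0

for second in range(1, HORIZON + 1):
    blue_progress += 1
    if second % RED_CYCLE == 0:
        red_actions += 1
        blue_progress = 0   # red's move invalidates blue's orientation
    elif blue_progress == BLUE_CYCLE:
        blue_actions += 1
        blue_progress = 0

print(f"red acted {red_actions} times, blue acted {blue_actions} times")
# With these assumed tempos: red acts 30 times, blue acts 0 times.
# Red is operating entirely inside blue's loop, so blue never completes
# a single cycle.
```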
I learned early on, working with red teaming greats like General Van Riper, not to accept artificial constraints. The minute you constrain the red team, you’ve officially moved outside the realm of red teaming and into the realm of experimentation. If the blue forces try to impose artificial constraints, ask them if they are also able to impose those constraints on the real-world adversary. Real attackers don’t steer away from operational systems or restrict their activity to business off-hours. Want to use Mylar balloons and aluminum foil to make your red team jeeps look like tanks? Does the adversary have access to Mylar balloons and foil? Fair game.
I was once asked to describe the methodology used by the FusionX red team, and I related our approach to a scene in the movie The Matrix. In the movie, a young child bends a spoon using only his mind. When our hero Neo attempts the same feat, the child prodigy notes that the secret is to remember that there is no spoon. “Do not try and bend the spoon, that’s impossible. Instead, only try to realize the truth…there is no spoon. Then you will see it is not the spoon that bends, it is only yourself.”
A methodology is an artificial constraint on the red team. To truly red team, you need to unleash the creativity and ingenuity of the experts on the team. The key is not to think outside the box, but to think without the box.
A successful red team can articulate its results in a way that brings context to the red team’s sponsor and supports their decision-making process. Your red team briefing should include a valid threat/competitor model, an attack narrative with contextual outcomes, and the value proposition for the attacker and the defender. I’ve found this to be especially true when briefing red team results to executives and boards of directors.
In the mid-1990s I had the honor of creating and running a Coalition Vulnerability Assessment Team that worked as a red team during classified military exercises. Our team had the best tools developed by the Five Eyes and the commercial sector at our disposal. However, it quickly became apparent that even the most sophisticated tools were not a replacement for talent. In fact, when Robert Garigue and I put together our hot wash briefing, one of our key findings was simply articulated as “Tools are NOT Talent”. Tools are an essential enabler for a red team, but they do not make the red team. If you want a tool-driven red team, focus your R&D staff on creating tools based upon red team requirements, not vice versa.
When an IT security organization tells us that they red team by running Nessus or Metasploit, we often ask, “What nation state does Nessus represent?” Tools are intent agnostic. An adversary is not. Tools treat all systems as equal. An adversary does not. There is great value in proactively probing your network with available tools, but they are not a replacement for a real human-led red team. By the same token, there is great value in exercises conducted with structured injects, but a real red team takes place in real time and is unscripted.
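As a thought experiment, the gap between a tool’s view and an adversary’s view can be sketched in a few lines of Python. The host names, scoring fields, and weights below are hypothetical assumptions, purely for illustration:

```python
# Illustrative sketch: a scanner treats every host identically, while a
# threat-modeled red team ranks targets against an adversary objective.
# Host names, fields, and weights are hypothetical assumptions.

hosts = {
    "hr-fileshare":    {"vulns": 12, "mission_impact": 1},
    "dev-workstation": {"vulns": 7,  "mission_impact": 2},
    "scada-gateway":   {"vulns": 1,  "mission_impact": 9},
}

def scanner_view(hosts):
    """A tool's view: every host is just its vulnerability count."""
    return sorted(hosts, key=lambda h: hosts[h]["vulns"], reverse=True)

def adversary_view(hosts, objective_weight):
    """An adversary's view: targets scored against what they want."""
    def score(h):
        return hosts[h]["mission_impact"] * objective_weight + hosts[h]["vulns"]
    return sorted(hosts, key=score, reverse=True)

print("scanner priority:  ", scanner_view(hosts))
print("adversary priority:", adversary_view(hosts, objective_weight=10))
# The scanner ranks the noisy fileshare first; an adversary intent on
# disruption goes straight for the hardened SCADA gateway.
```

The point of the sketch is the design choice, not the numbers: a scanner surfaces whatever is noisiest, while an adversary pursues whatever serves their objective.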
Jason Healey first articulated this concept years ago to pull our attention away from the noise on the wire and back to the living, breathing, human adversary on the other side. You can’t think of your adversary only in the context of the technical attack that manifests itself; you have to think in the context of their human behavior. A good red team will reinforce this fact for the blue team.
Previous efforts to articulate this concept fell short until the movie World War Z did it simply and brilliantly with the 10th man rule:
“If nine intelligence analysts came to the same conclusion, it was the duty of the tenth to disagree. No matter how unlikely or far-fetched a possibility might be, one must always dig deeper.”
As humans, we are fundamentally biased towards consensus, hive mind, and an inherent desire to believe the lie. A good red teamer has to break the chains of convention, imagine the unimaginable, and see whether the unimaginable can manifest itself as red team action. Here the 10th man rule has great value, not only in requiring the 10th man to automatically dissent, but also as a mental exercise to expand the potential of the red team. As a red teamer, your job is to help prevent failures of imagination.
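If it helps to see the rule as a decision procedure rather than a slogan, here is a minimal Python sketch; the analyst assessments and the wording of the forced dissent are my own hypothetical framing:

```python
from collections import Counter

def tenth_man(assessments):
    """If every analyst reaches the same conclusion, return a mandatory
    dissenting tasking; otherwise the group already contains dissent."""
    counts = Counter(assessments)
    if len(counts) == 1:                  # unanimous consensus
        consensus = next(iter(counts))
        return f"assume '{consensus}' is wrong and dig deeper"
    return None

# Nine analysts agree, so the tenth is obligated to disagree.
print(tenth_man(["no attack expected"] * 9))
# A mixed set of views already contains dissent; nothing is forced.
print(tenth_man(["no attack expected"] * 8 + ["attack likely"]))
```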
I once spent 45 minutes backing two of my red teamers who were engaged in a heated discussion with a customer over a request to change a single word in a red team report. We never backed down, as the word carried real value and context for the decision maker. A red team should never compromise the integrity of its results to satisfy the red team sponsor. True value comes from speaking truth to power, which often means articulating unpopular findings or diverging from the status quo and common conceptions.
Comments? Questions? Tweet @mattdevost.