
The moral and ethical issues surrounding the exponential growth of machine learning – and how they will evolve into a framework for the governance of machine learning systems and commercial applications – are as challenging as any of the tough issues a business may face in this climate of multiple simultaneous crises. The baseline legal and business risks are at times difficult to discern when considering the deployment of AI. How do you develop an AI system when taxonomies, mitigations, terminology, frameworks, and governance are evolving on a parallel track? Arguably, the response to ChatGPT has thrown fuel on this fire – and we are really winging it at this point.

The AI systems development environment is currently rife with unknowns and uncertainty. As a result, strategic discipline is more than ever a requirement of management and leadership. Potential risks and threats cannot be ignored; they have to be acknowledged, discussed, and somehow (quantitatively or qualitatively) structured into decision intelligence and risk awareness efforts. For insights into the challenges ahead, use cases that are already in full deployment are highly instructive. It is also helpful if the case study is large-scale, well-funded, and devoid of commercial limitations and restrictions. More often than not, these ‘extreme sampling’ use cases allow for insights that transcend industry sectors and are broadly applicable.

The following Air Force and Space Force projects are a window into the potential of generative AI in the governance of commercial space and the use of automation to improve the security compliance process in software development. This mandate to “secure the software used to handle space data more efficiently” is also illustrative of the “Secure by Design” approach advocated by CISA’s Jen Easterly and the recently released 2023 National Cybersecurity Strategy. Both projects exemplify “how the military and industry partners are using innovative technologies to increase the efficiency of certain critical processes.” (1)

The “Speed of Need”: Generative AI to Identify Space Objects

AFWERX, a technology directorate of the Air Force Research Laboratory (AFRL), will work with “Synthetaic, a company using synthetic data to train artificial intelligence models when training data is limited…to identify objects of interest quickly and accurately.” As reported by Mila Jasper at Nextgov in 2021:

“Corey Jaskolski, Synthetaic president and founder, told Nextgov in a recent interview the current process for doing geospatial labeling is often a months-long, labor-intensive process. Labeling had to be done by hand before AI models could be built to then do the work to identify objects of interest. That’s especially troublesome given that sophisticated AI models depend on large amounts of data, while many objects that analysts would want to detect are rare.

To get around this problem at speed, Synthetaic uses a technique called generative AI to essentially use existing data to propagate new data, allowing for high-performance detection without massive, existing datasets. Synthetaic’s tool, called Rapid Automatic Image Categorization, builds, runs, and gets results from AI “in seconds.”

RAIC “allows us to run AI models at a much faster speed that in this case, runs at a sort of what we call ‘the speed of need’ for the Department of Defense and the warfighter,” Jaskolski said.
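Synthetaic’s RAIC itself is proprietary, but the core idea Jaskolski describes – propagating new training data from a handful of labeled examples so a detector can be trained without a massive, pre-existing dataset – can be sketched in a few lines of Python. The example below is purely illustrative: the feature vectors, the Gaussian perturbation, and the off-the-shelf classifier are stand-ins, not Synthetaic’s actual method.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def propagate(examples, n_new, noise=0.05):
    """Resample a few real feature vectors with small perturbations:
    a toy stand-in for a learned generative model."""
    idx = rng.integers(0, len(examples), size=n_new)
    return examples[idx] + rng.normal(scale=noise, size=(n_new, examples.shape[1]))

# Pretend only 5 labeled examples of a rare object exist (64-dim features),
# alongside plentiful background imagery.
rare = rng.normal(loc=1.0, scale=0.1, size=(5, 64))
background = rng.normal(loc=0.0, scale=0.1, size=(500, 64))

# "Propagate" the rare class into a full synthetic training set,
# then train an ordinary classifier on the expanded data.
rare_aug = np.vstack([rare, propagate(rare, n_new=495)])
X = np.vstack([rare_aug, background])
y = np.concatenate([np.ones(len(rare_aug)), np.zeros(len(background))])

clf = LogisticRegression(max_iter=1000).fit(X, y)
print("accuracy on fresh background samples:",
      clf.score(rng.normal(0.0, 0.1, size=(100, 64)), np.zeros(100)))

In a production system the perturbation step would be a learned generative model rather than simple noise, but the data flow is the same: a few real examples in, a usable training set out.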

The Space Force “Secure by Design” Software Development Process

Anchore, a container security company, will work with Kobayashi Maru, the Space Force’s software factory, to shift security compliance earlier into the software development process.

…Anchore, which also works with the Air Force’s Platform One program, will be figuring out how to help Kobayashi Maru shift as many of the Security Technical Implementation Guide, or STIG, requirements as possible earlier into the development process.

“So the idea of the work that we’re undertaking is to really take these STIG tests, and look at what can be applied, much earlier in the development cycle,” Anchore’s Paul Holt said in an interview. “Obviously, primarily to drive out the cost of remediation, but also to keep the speed of innovation going, because in all of these cases, it’s about getting software, getting innovation into the hands of the warfighter … as quickly as possible.”

Anchore and Space Force will work together to identify the STIGs, which Holt called “heavyweight” compliance measures, with which they need to comply, and then conduct testing to see where they can fit within development pipelines. Baking security into the development process is the driving idea behind DevSecOps.

“With a program of this magnitude, it is critical that the software handling space data continues to be secure and protected, using the latest technology and techniques available in cybersecurity,” Anchore Chief Technology Officer Dan Nurmi said in a statement. “Adding enforcement of security compliance standards earlier into the software development cycle means that violations can be detected and addressed as they arise and are resolved quickly, resulting in more efficient and comprehensive security enforcement across the development lifecycle.” (1)
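To make the “shift left” idea concrete, here is a minimal, hypothetical compliance gate of the kind a pipeline might run on every commit rather than at accreditation time. The JSON results schema is invented for illustration; real STIG tooling (Anchore’s products, OpenSCAP, and others) defines its own formats.

#!/usr/bin/env python3
"""Hypothetical 'shift-left' compliance gate for a CI pipeline.

The results schema assumed here is invented for illustration only.
"""
import json
import sys

def gate(results_path, max_open_findings=0):
    # Assumed (hypothetical) schema: a list of entries such as
    # {"rule": "V-230234", "severity": "high", "status": "pass" or "fail"}
    with open(results_path) as f:
        findings = json.load(f)
    failures = [r for r in findings if r.get("status") == "fail"]
    for r in failures:
        print(f"open STIG finding: {r.get('rule')} "
              f"(severity: {r.get('severity', 'unknown')})")
    if len(failures) > max_open_findings:
        print(f"build blocked: {len(failures)} open finding(s)")
        return 1  # nonzero exit code fails the pipeline stage
    print("compliance gate passed")
    return 0

if __name__ == "__main__":
    sys.exit(gate(sys.argv[1]))

Wired into a pipeline stage, a nonzero exit blocks the merge, which is precisely the cost-of-remediation point Holt makes: a violation caught here costs minutes of developer time, not a failed security review months later.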



About the Author

Daniel Pereira

Daniel Pereira is research director at OODA. He is a foresight strategist, creative technologist, and an information and communications technology (ICT) and digital media researcher with 20+ years of experience directing public/private partnerships and strategic innovation initiatives.