
In a previous post we asked “Is the US Government Over-Regulating Artificial Intelligence?” We followed it with some historical context and a summary of today’s government controls in “Regulations on Government Use of AI.” This post builds on those two to examine an example of how a decontrol mindset can help speed AI solutions into use.

The payoff of AI applied to government missions is just beginning, and already there is a long list of successful applications. The government pioneered the application of now old-fashioned AI approaches like expert systems and machine learning, producing systems that have succeeded in ways traditional IT could not. Some successes recently highlighted by the OMB include:

  • Department of Health and Human Services, where AI is used to predict infectious diseases and assist in preparing for potential pandemics, as well as anticipate and mitigate prescription drug shortages and supply chain issues. 
  • Department of Energy, where AI is used to predict natural disasters and preemptively prepare for recoveries.
  • Department of Commerce, where AI is used to provide timely and actionable notifications to keep people safe from severe weather events.
  • National Aeronautics and Space Administration, where AI is used to assist in the monitoring of Earth’s environment, which aids in safe execution of mission-planning.
  • Department of Homeland Security, where AI is used to assist cyber forensic specialists to detect anomalies and potential threats in federal civilian networks.

There are many others, but you get the point. The fact is the government has many use cases for old-school AI and will have many more for the new Generative AI capabilities now sweeping the nation.

The exciting thing about these Generative AI capabilities, the thing that makes them significantly different from old-fashioned AI like expert systems and machine learning, is that they are not just another technology. They represent a profound shift between old and new. For the first time in the co-evolution of humans and technology, we are able to elevate our tools to be teammates. Generative AI will change how we produce products and services and how governments serve citizens and support missions. Generative AI uses many previous AI techniques, including machine learning, but builds on them to enable the generation of new text, audio, images, and video. It can create things that never existed before.

New solutions that leverage Generative AI are already built into many technology platforms available to government. I’m thinking specifically about Palantir, Recorded Future, and Vectara. There are also many SaaS platforms (like OpenAI, Mistral, Anthropic, Inflection AI, Stable Diffusion, MidJourney). Big players like Google, Microsoft, Amazon, and IBM also have Generative AI offerings. I have also seen Generative AI leveraged by dozens of cybersecurity firms to improve their capabilities. Generative AI is coming at us fast.

But by its nature, Generative AI has problems meeting the requirements outlined in the NIST AI Risk Management Framework (AI RMF). The criteria systems should be evaluated against include:

  • Validity and reliability
  • Safety
  • Security and resilience
  • Accountability and transparency
  • Explainability and interpretability
  • Privacy enhancement
  • Fairness, with harmful bias managed

There are solid definitions of each of these in the AI RMF, and there is little argument that these are the criteria you should use to assess the trustworthiness of an AI system.
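As one rough illustration (not anything the AI RMF itself prescribes), an agency team could track a candidate system against these characteristics with a simple scoring record. This is a minimal sketch: the identifiers mirror the list above, while the 0-5 scale and the threshold are assumptions made for the example.

```python
from dataclasses import dataclass
from typing import Dict, List

# The trustworthiness characteristics listed above, as plain identifiers.
AI_RMF_CRITERIA = (
    "validity_and_reliability",
    "safety",
    "security_and_resilience",
    "accountability_and_transparency",
    "explainability_and_interpretability",
    "privacy_enhancement",
    "fairness_and_bias_management",
)

@dataclass
class CriterionScore:
    criterion: str   # one of AI_RMF_CRITERIA
    score: int       # assumed 0-5 scale; the AI RMF does not prescribe one
    rationale: str   # short note on the evidence behind the score

def summarize(scores: List[CriterionScore], threshold: int = 3) -> Dict[str, List[str]]:
    """Split criteria into those that meet an (assumed) threshold and those that need mitigation."""
    return {
        "meets": [s.criterion for s in scores if s.score >= threshold],
        "needs_mitigation": [s.criterion for s in scores if s.score < threshold],
    }
```

A reviewing official could then decide, criterion by criterion, whether a low score is a blocker or something a compensating control can absorb.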

But it can be very hard to engineer Generative AI solutions that score well on those criteria.

Now consider what a suite of Generative AI systems could do for an organization responsible for analyzing complex, dynamic situations, like those common in intelligence, law enforcement, and cybersecurity work. Analysts should have access to many AI tools, including many based on Generative AI. But what if those Generative AI tools serve analysts by providing assessments yet cannot explain how they reach their conclusions? That would fail the criteria of the AI RMF and may well be cause for an agency AI officer to shut the program down because it does not meet the control.

But what if the organization approaches this with a different attitude, one where it is acceptable to decide not to apply every control? The approach in this case could be to let the Generative AI tool help analysts assess the dynamic situation, but ensure that any conclusions are reviewed by a human before they are used. An approach like this can enable Generative AI applications that score very low on multiple criteria to still deliver results.
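To make that review-before-use pattern concrete, here is a minimal sketch of a human-in-the-loop gate, assuming a simple text-in, text-out model interface. The function names, the `DraftAssessment` record, and the stand-in model are all illustrative, not any particular vendor’s API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Callable, Optional

@dataclass
class DraftAssessment:
    """An AI-generated assessment that is not releasable until an analyst signs off."""
    question: str
    ai_summary: str
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    reviewed_by: Optional[str] = None
    approved: bool = False

def generate_draft(question: str, model: Callable[[str], str]) -> DraftAssessment:
    """Ask the Generative AI tool for a draft; `model` is any text-in, text-out callable."""
    return DraftAssessment(question=question, ai_summary=model(question))

def record_review(draft: DraftAssessment, analyst: str, approve: bool) -> DraftAssessment:
    """Capture the analyst's decision; nothing leaves the workflow without this step."""
    draft.reviewed_by = analyst
    draft.approved = approve
    return draft

def release(draft: DraftAssessment) -> str:
    """Refuse to release any conclusion that has not been reviewed and approved by a human."""
    if not (draft.reviewed_by and draft.approved):
        raise PermissionError("This AI-generated conclusion has not been approved by an analyst.")
    return draft.ai_summary

# Example with a stand-in model: the draft exists, but release() only works after review.
draft = generate_draft("Summarize activity on network X", model=lambda q: "Draft assessment text")
draft = record_review(draft, analyst="analyst_1", approve=True)
print(release(draft))
```

The point of the sketch is the gate, not the model: the tool can remain unable to explain itself, because the analyst’s review is the compensating control.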

If approaches like this are not allowed, the only Generative AI tools available to analysts will be those that have been rigorously engineered to reduce hallucination, are based on known open source models, trained on known data, and able to show exactly how results are determined every time. Some tools will no doubt require that level of engineering. But most need not.

Leveraging this approach requires understanding use cases well enough that the right solution can be matched to each one. It also requires the judgment to know when to decontrol in order to accelerate AI solutions.


About the Author

Bob Gourley

Bob Gourley is an experienced Chief Technology Officer (CTO), Board Qualified Technical Executive (QTE), author, and entrepreneur with extensive past performance in enterprise IT, corporate cybersecurity, and data analytics. He is the CTO of OODA LLC, a unique team of international experts that provides board advisory and cybersecurity consulting services. OODA publishes OODALoop.com. Bob has been an advisor to dozens of successful high-tech startups and has conducted enterprise cybersecurity assessments for businesses in multiple sectors of the economy. He was a career Naval Intelligence Officer and is the former CTO of the Defense Intelligence Agency.