In a previous post we asked "Is the US Government Over-Regulating Artificial Intelligence?" We followed it with historical context and a summary of today's government controls in "Regulations on Government Use of AI." This post builds on those two to examine an example of how a decontrol mindset can help speed AI solutions into use.
The payoff of applying AI to government missions is just beginning, and already there is a long list of successful applications. The government pioneered the use of now old-fashioned AI approaches like expert systems and machine learning, producing systems that succeeded in ways traditional IT could not have, and the OMB has recently highlighted a number of these successes.
There are so many others, but you get the point. The fact is the government has many use cases for old-school AI and will have many more for the new Generative AI capabilities now sweeping the nation.
The exciting thing about these Generative AI capabilities, the thing that makes them significantly different from old-fashioned AI like expert systems and machine learning, is that they are not just another technology. They mark a profound shift between old and new. For the first time in the co-evolution of humans and technology, we are able to elevate our tools to be teammates. Generative AI will change how we produce products and services and how governments serve citizens and support missions. Generative AI builds on many previous AI techniques, including machine learning, to enable the generation of new text, audio, images, and video. It can create things that never existed before.
New solutions that leverage Generative AI are already built into many technology platforms available to government. I'm thinking specifically of Palantir, Recorded Future, and Vectara. There are also many SaaS offerings (from the likes of OpenAI, Mistral, Anthropic, Inflection AI, Stable Diffusion, and Midjourney). Big players like Google, Microsoft, Amazon, and IBM also have Generative AI offerings. I have also seen Generative AI leveraged by dozens of cybersecurity firms to improve their capabilities. Generative AI is coming at us fast.
But by its nature, Generative AI has problems meeting the requirements outlined in the NIST AI Risk Management Framework (AI RMF). The criteria systems should be evaluated against include:

- Valid and reliable
- Safe
- Secure and resilient
- Accountable and transparent
- Explainable and interpretable
- Privacy-enhanced
- Fair, with harmful bias managed
There are solid definitions of each of these in the AI RMF, and there is little argument that these are the right criteria for assessing the trustworthiness of an AI system.
But it can be very hard to engineer Generative AI solutions that score well against those criteria.
Now consider what a suite of Generative AI systems could do for an organization responsible for analyzing complex, dynamic situations, like those common in the intelligence community, law enforcement, and cybersecurity. Analysts should have access to many AI tools, including many based on Generative AI. But what if those Generative AI tools serve the analysts by providing assessments but cannot explain how they reach their conclusions? That would fail the criteria of the AI RMF and may well be cause for an agency AI officer to shut the program down since it does not meet the control.
But what if the organization approaches this with a different attitude, one where it is acceptable to find ways not to apply every control? The approach in this case could be to let the Generative AI tool help analysts assess the dynamic situation, but ensure that any conclusions are reviewed by a human before they are used. An approach like this can enable Generative AI applications that score very low on multiple criteria to still deliver results.
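To make the pattern concrete, here is a minimal sketch in Python of what such a human-review gate might look like. Everything in it (the Assessment record, the call_model stub, the function names) is hypothetical rather than any vendor's API; the point is that the workflow, not the model, enforces the control: no AI-generated conclusion can be released until a named analyst approves it.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class Status(Enum):
    PENDING_REVIEW = "pending_review"  # model output exists; no human has validated it
    APPROVED = "approved"              # a named analyst vouched for it; safe to act on
    REJECTED = "rejected"              # the analyst found it wrong; it is never released


@dataclass
class Assessment:
    question: str
    model_output: str
    status: Status = Status.PENDING_REVIEW
    reviewer: Optional[str] = None


def call_model(question: str) -> str:
    """Placeholder for a call to any Generative AI service."""
    return f"Draft assessment for: {question}"


def generate_assessment(question: str) -> Assessment:
    """Model output always enters the workflow as an unreviewed draft."""
    return Assessment(question=question, model_output=call_model(question))


def analyst_review(a: Assessment, reviewer: str, approve: bool) -> Assessment:
    """The control point: a named human decides, and the decision is recorded."""
    a.status = Status.APPROVED if approve else Status.REJECTED
    a.reviewer = reviewer
    return a


def release(a: Assessment) -> str:
    """Downstream consumers only ever see approved, human-attributed conclusions."""
    if a.status is not Status.APPROVED:
        raise PermissionError("Assessment has not been approved by an analyst.")
    return f"{a.model_output} [reviewed by {a.reviewer}]"


draft = generate_assessment("Attribute the intrusion set observed this week.")
reviewed = analyst_review(draft, reviewer="analyst_01", approve=True)
print(release(reviewed))
```

In a pattern like this, the tool can be as opaque as it wants; accountability and explainability live with the reviewing analyst rather than inside the model.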
If approaches like this are not allowed, the only Generative AI tools available to analysts will be those that have been rigorously engineered to reduce hallucination, are based on known open source models, trained on known data, and able to show exactly how results are determined every time. Some tools will no doubt require this level of engineering. But most need not.
Leveraging this approach requires understanding use cases well enough that the right solution can be matched to each one. It also requires the judgment to know when to decontrol in order to accelerate AI solutions.
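To illustrate the kind of judgment involved (this is a thought experiment, not language from the AI RMF or any OMB guidance), the matching of controls to use cases might be reasoned about like this:

```python
from dataclasses import dataclass


@dataclass
class UseCase:
    name: str
    consequence_of_error: str   # "low", "moderate", or "high"
    human_reviews_output: bool  # does an analyst sit between the model and any action?


def required_rigor(uc: UseCase) -> str:
    """Hypothetical rule of thumb, not official guidance."""
    if uc.consequence_of_error == "high" and not uc.human_reviews_output:
        # Unreviewed, high-consequence output: demand the full engineering burden.
        return "full controls: explainability, known models and data, hallucination testing"
    if uc.human_reviews_output:
        # A human in the loop can stand in for per-answer explainability.
        return "decontrolled: human review substitutes for some AI RMF criteria"
    return "standard controls"


for uc in (
    UseCase("autonomous cyber response", "high", human_reviews_output=False),
    UseCase("analyst brainstorming aid", "moderate", human_reviews_output=True),
):
    print(f"{uc.name}: {required_rigor(uc)}")
```

The specific thresholds are debatable; the point is that the decision to decontrol is made deliberately, per use case, rather than applying every control everywhere by default.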