
Maybe it is time to start thinking of ways to decontrol our AI.

I have been experimenting with using the word “decontrol” in AI-related conversations in and around government. I use it in a way few have heard yet, as in “maybe it is time to start thinking of decontrol of AI.”

I generally get three responses to that term. Mission-focused tech leaders seem to get the rationale for decontrol right away. There are offices and leaders today with responsibility for accelerating AI adoption in their organizations, and the flood of government regulations is a drag on their work. For many leaders in this situation, the word decontrol means making their job of mission support easier.

Others react with curiosity. How can we be talking about decontrol when for the last decade everyone has been talking about the potential harms of AI? With those in this camp I underscore that I am not oblivious to the need for well-engineered systems that mitigate risk, but since over-regulation is slowing our use of AI, it is time to start finding ways to loosen some controls.

A third category of person reacts with anger when I talk about decontrol of AI. These are generally the die-hard AI doomers who have bought into the thesis that AI poses such great risk of harm that it must be controlled with extreme measures. When asked if there is even one control on AI they would consider loosening in any situation at all, the answer is always a hard “no!” They want more control of AI, not less.

My experience inserting the word “decontrol” into AI conversations leads me to the following conclusions:

In government AI, as in other domains, there are Accels and Decels

  • Accels want to accelerate AI in service of government missions. They believe with a passion that new technologies can deliver value and that the biggest risk is not leveraging AI fast enough. Accels realize control of AI, like control of any technology, can be good and necessary, but bureaucratic over-control is wasteful and poses a big risk in itself.
  • Decels want to slow or even roll back the use of AI in service of government missions. Sometimes this is due to real concerns and fears. Some suspect it is also about control and the age-old human tendency to think control leads to personal glory. To decels, the greatest risk of AI is that it might escape and be totally uncontrolled, and to them this risk is so significant it warrants slowing down progress no matter the mission need.

The value of the word “decontrol” is in starting a conversation

  • Using the term decontrol with accels can help start discussions on rules that are no longer needed or that should be rolled back. Accels know the reality of mission support and should be listened to when it comes to policy and action.
  • Using the term decontrol with decels might cause a visceral reaction at first, but if you keep a smile on your face and really listen, it can start a dialogue that brings to light their real concerns and fears.
  • Conversation around decontrol can open up pathways to consider a more balanced approach to control that can help accelerate AI while mitigating the very real risks.
  • Asking for ideas on where decontrol is warranted opens discussions on the specific fears to address with targeted, rational measures rather than blanket restrictions. This can lead to smarter regulation that reduces the biggest risk of all: stifling innovation and progress.

It is time to evaluate AI regulations with decontrol in mind

If AI capabilities are evolving so fast, how can we assume we have our regulations and controls right? Shouldn’t we adopt the mindset that AI regulations should be continuously evaluated? What might have been a necessary control a year ago may be obsolete or overly restrictive tomorrow. Regular review processes, informed by technologists and by those charged with supporting the mission, are essential.

The next post in this series will provide more context on the nature of today’s AI regulations with an eye toward decontrol. We will discuss what makes a good control and give examples of controls that may no longer be relevant.

For the next posts in this series, see: Regulations on Government Use of AI and then Decontrol AI to Accelerate Solutions.

About the Author

Bob Gourley

Bob Gourley is an experienced Chief Technology Officer (CTO), Board Qualified Technical Executive (QTE), author, and entrepreneur with extensive past performance in enterprise IT, corporate cybersecurity, and data analytics. He is CTO of OODA LLC, a unique team of international experts who provide board advisory and cybersecurity consulting services. OODA publishes OODALoop.com. Bob has been an advisor to dozens of successful high-tech startups and has conducted enterprise cybersecurity assessments for businesses in multiple sectors of the economy. He was a career Naval Intelligence Officer and is the former CTO of the Defense Intelligence Agency.