Maybe it is time to start thinking of ways to decontrol our AI.
I have been experimenting with using the word “decontrol” in AI-related conversations in and around government. I use it in a way few have heard yet, as in “maybe it is time to start thinking of decontrol of AI.”
I generally get three responses to that term. Mission-focused tech leaders seem to get the rationale for decontrol right away. There are offices and leaders today with responsibility for accelerating AI into their organizations, and the flood of government regulations is a drag on their work. For many leaders in this situation, the word decontrol means making their job of mission support easier.
Others react with curiosity. How can we be talking about decontrol when for the last decade everyone has been talking about the potential harms of AI? With those in this camp I underscore that I am not oblivious to the need for well-engineered systems that mitigate risk, but since over-regulation is slowing our use of AI, it is time to start finding ways to loosen some controls.
A third category of person reacts with anger when I talk about decontrol of AI. These are generally the die-hard AI doomers who have bought into the thesis that AI poses such great risk of harm that it must be controlled with extreme measures. When asked whether there is even one control on AI they would consider loosening in any situation at all, the answer is always a hard “no!” They want more control of AI, not less.
My lessons in inserting the word “decontrol” into AI conversations lead me to the following conclusions:
In government AI, like in other domains, there are Accels and Decels
The value of the word Decontrol is in starting a conversation
It is time to evaluate AI regulations with decontrol in mind
If AI capabilities are evolving so fast, how can we assume we have our regulations and controls right? Shouldn’t we adopt the mindset that AI regulations should be continuously evaluated? What might have been a necessary control a year ago may be obsolete or overly restrictive tomorrow. Regular review processes, informed by technologists and by those charged with supporting the mission, are essential.
The next post in this series will provide more context on the nature of AI regulations today, with an eye towards decontrol. We will discuss what makes a good control and give some examples of controls that may no longer be relevant.
For the next in this series see: Regulations on Government Use of AI and then Decontrol AI to Accelerate Solutions.