In a previous post in this series we raised a question: Is the US Government Over-Regulating AI? That post discussed the word “decontrol” as a concept whose time has come in discussion of AI use in the government.
But how can we know if we are over regulated without an overview of the regulatory environment? Here we provide a short overview of the regulation of government IT with an eye towards shedding light on ways to accelerate AI through some decontrol.
The History of Government Technology Control
A useful starting point in tracking how the government controls technology is the 1970 Ware Report, formally known as the “Security Controls for Computer Systems: Report of the Defense Science Board Task Force on Computer Security.” During this era, technology controls were heavily influenced by the Cold War, and protection of data was the focus. The second major driver of technology controls in this period was the civil rights movement and the widespread recognition that technology could be implemented in unfair ways or could result in threats to privacy. Both drivers, national security and civil rights, remain key drivers of our controls today.
In 2002 the government became more focused on the smart use of technology to deliver citizen services and support national security missions, and NIST began a new focus on producing smart controls over government technology. This was the year Congress passed the E-Government Act, including a provision called FISMA (the Federal Information Security Management Act), which established requirements for how agencies secure and control their IT.
During the Obama administration a new focus on rules for the smart use of AI resulted in an acceleration of controls for federally funded AI research. During the Trump administration new technical standards for government use of AI were promulgated in an Executive Order on “Maintaining American Leadership in Artificial Intelligence,” which launched the American AI Initiative. The Biden administration has issued several major guidance documents and initiatives, starting with a “Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People” and an EO on rooting out bias in AI. The latest Executive Order on AI (signed 30 Oct 2023) is discussed further below, since it is a highly significant development in the history of AI regulation and control.
During the last decade Congress passed several laws and provided funding for government use of AI. Much of this was focused on data protection and data privacy, but in 2019 Congress established a definition of AI, and every year since it has enacted legislation on items such as AI R&D and creating a new National AI Initiative Office in OSTP (for overseeing and implementing AI strategy). Congress has also issued guidance on how it expects DoD to govern AI.
The October 2023 Executive Order on AI
The Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence is the longest executive order in presidential history. I challenge anyone to read it in one sitting! The good news is you can use AI like this OpenAI GPT I wrote to summarize the EO for you. There is guidance for all corners of the federal government and for those the government contracts with or influences through regulation or policy.
The EO applies to all agencies and departments in the executive branch. There are some independent agencies that Congress has established that the President has very limited authority over, but even these regulatory agencies are referenced in this EO and are strongly encouraged to comply with the guidance.
Perhaps the most significant portion of the EO related to control of AI is Section 10 on “Advancing Federal Government Use of AI.” The White House has created an interagency council to coordinate the development and use of AI in agency programs and operations. Agencies are also encouraged to designate permanent Chief AI Officers. This interagency council and agency leaders will play a critical role in the establishment of rules by the White House OMB. The Chief AI Officer for each agency will hold primary responsibility for coordinating that agency’s use of AI and managing its risks.

Many requirements are being levied on these officers, most of which are incredibly well thought out, including the need for AI strategies in the agencies, consideration of cybersecurity, use of testing and even red teaming for AI solutions, and independent evaluation of vendor claims on AI. However, there is also cause for concern with requirements for extensive control over all AI solutions. This is especially concerning now that some form of AI is being added to almost 100% of our IT. As with other complex initiatives, with good leadership the actions in this AI EO can be executed well and result in incredible gain; with poor leadership they can stifle innovation and slow us down.
Other Controls Over Government AI
Controls over today’s government AI solutions include every control ever established for secure governance over data and IT, plus an additional layer due to the new nature of AI tools.
One thing a review of existing AI regulations reveals is the challenge in finding a bad one to toss out. Each rule and regulation was created to address a real concern, whether associated with waste, fraud, abuse, privacy protections, security, civil rights, fairness, or other needs.
This leads to what is probably the most significant suggestion regarding decontrol of AI. Since it is hard to single out controls to remove entirely, agency leaders should be empowered to use their judgment on which controls to follow and which to exempt themselves from. It may be that the most effective means of decontrol is allowing leaders at all levels to deviate when their judgment gives them cause to.
Maybe the art form here is to ensure all agency Chief AI Officers are accels vice decels. CIOs, CTOs, and agency heads should also be AI accels. This will help them all use their judgment on which requirements for controls are relevant to the project at hand.
The next post in this series, titled “Decontrol AI to Accelerate Solutions,” will capture an example of how leaders can accelerate AI solutions into their enterprise while complying with guidance through some strategic decontrol.