
In a previous post in this series we raised a question: Is the US Government Over-Regulating AI? That post discussed “decontrol” as a concept whose time has come in discussions of AI use in government.

But how can we know if we are over-regulated without an overview of the regulatory environment? Here we provide a short overview of the regulation of government IT with an eye towards shedding light on ways to accelerate AI through some decontrol.

The History of Government Technology Control

A useful starting point in tracking how the government controls technology is the 1970 Ware Report, formally known as the “Security Controls for Computer Systems: Report of the Defense Science Board Task Force on Computer Security.” During this era, technology controls were heavily influenced by the Cold War, and protection of data was the focus. The second major driver of technology controls in this period was the civil rights movement and the widespread recognition that technology could be implemented in unfair ways or could result in threats to privacy. Both of these drivers, national security and civil rights, remain key drivers of our controls today.

In 2002 the government became more focused on the smart use of technology to deliver citizen services and to support national security missions, and NIST began producing smarter controls over government technology. This was the year Congress passed the E-Government Act, including a provision called FISMA (the Federal Information Security Management Act). FISMA requires control over IT through the following (a sketch of how an agency might track these follows the list):

  • Risk Management: Agencies must conduct periodic risk assessments.
  • Information System Controls: Implementation of appropriate security controls.
  • Security Plans: Agencies are required to develop, document, and implement an agency-wide information security program.
  • Annual Reviews: Agencies must conduct annual reviews of their information security programs.
  • Reporting: Agencies report their findings to the Office of Management and Budget (OMB) for oversight and guidance.
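To make these obligations concrete, here is a minimal sketch, in Python, of how an agency might track the five FISMA items above for each information system. This is illustrative only: the record fields, the one-year review window, and the function names are hypothetical assumptions, not anything FISMA or NIST prescribes.

```python
# Illustrative sketch only: tracking the five FISMA obligations per system.
# Field names and the one-year review window are hypothetical assumptions.
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class SystemRecord:
    name: str
    last_risk_assessment: date                          # Risk Management
    controls: list[str] = field(default_factory=list)   # Information System Controls
    security_plan_on_file: bool = False                 # Security Plans
    last_annual_review: date | None = None              # Annual Reviews
    reported_to_omb: bool = False                       # Reporting

def overdue_for_review(system: SystemRecord, today: date) -> bool:
    """Flag systems whose annual review is missing or more than a year old."""
    if system.last_annual_review is None:
        return True
    return today - system.last_annual_review > timedelta(days=365)

# Usage: a system assessed in 2023 but never reviewed shows up as overdue.
payroll = SystemRecord("payroll", last_risk_assessment=date(2023, 1, 15))
print(overdue_for_review(payroll, date.today()))  # True
```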

During the Obama administration a new focus on rules for the smart use of AI resulted in an acceleration of controls for federally funded AI research. During the Trump administration new technical standards for government use of AI were promulgated in an Executive Order on “Maintaining American Leadership in Artificial Intelligence,” which launched the American AI Initiative. The Biden administration has issued several major guidance documents and initiatives, starting with a “Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People” and an EO on rooting out bias in AI. The latest Executive Order on AI (signed 30 Oct 2023) is discussed further below since it is a significant development in the history of AI regulation and control.

During the last decade Congress passed several laws and provided funding for government use of AI. Much of this was focused on data protection and data privacy, but in 2019 Congress established a statutory definition of AI, and every year since it has enacted legislation on items such as AI R&D and the creation of a new National AI Initiative Office in OSTP to oversee and implement national AI strategy. Congress has also issued guidance on how it expects DoD to govern AI.

The October 2023 Executive Order on AI

The Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence is the longest executive order in presidential history. I challenge anyone to read it in one sitting! The good news is you can use AI, like this OpenAI GPT I wrote, to summarize the EO for you. There is guidance for all corners of the federal government and for those the government contracts with or influences through regulation or policy.
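As a concrete illustration of that kind of summarization workflow, the minimal Python sketch below calls the OpenAI API to digest the EO. It assumes you have saved the EO text to a local file and set an API key; the filename, model choice, and prompts are my assumptions, and this is not the GPT linked above. The full EO is long enough that a real workflow would likely need to summarize it in chunks.

```python
# A minimal sketch (not the GPT referenced above) of summarizing the EO text
# with the OpenAI Python client. Assumes the EO has been saved to eo_text.txt
# and OPENAI_API_KEY is set in the environment; the model choice is an assumption.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
eo_text = open("eo_text.txt", encoding="utf-8").read()

# Note: the full EO may exceed a model's context window; chunk it if needed.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative choice; any capable model works
    messages=[
        {"role": "system", "content": "You summarize federal policy documents for agency CIOs."},
        {"role": "user", "content": "Summarize each section of this EO in one sentence:\n\n" + eo_text},
    ],
)
print(response.choices[0].message.content)
```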

The EO applies to all agencies and departments in the executive branch. Congress has established some independent agencies over which the President has very limited authority, but even these regulatory agencies are referenced in this EO and are strongly encouraged to comply with its guidance.

Perhaps the most significant portion of the EO related to control of AI is Section 10 on “Advancing Federal Government Use of AI.” The White House has created an interagency council to coordinate the development and use of AI in agency programs and operations, and agencies are encouraged to designate permanent Chief AI Officers. This interagency council and agency leaders will play a critical role in the establishment of rules by the White House OMB. The Chief AI Officer for each agency will hold primary responsibility for coordinating the agency’s use of AI and managing its risks. Many requirements are being levied on these officers, most of which are well thought out, including the need for agency AI strategies, consideration of cybersecurity, use of testing and even red teaming for AI solutions, and independent evaluation of vendor claims about AI.

However, there is also cause for concern with requirements for extensive control over all AI solutions, especially now that some form of AI is being added to almost 100% of our IT. As with other complex initiatives, with good leadership the actions in this AI EO can be well executed and result in incredible gain; with poor leadership they can stifle innovation and slow us down.

Other Controls Over Government AI

Controls over today’s government AI solutions include every control ever established for secure governance of data and IT, plus an additional layer due to the new nature of AI tools. Some of note include:

  • NIST AI Risk Management Framework: NIST coordinated across government and industry to establish AI guidance via the AI Risk Management Framework (a sketch of its four functions follows this list). This is a framework vice direct controls, but in the hands of decels it could be interpreted as blanket mandates vice the smart risk management approach it is designed to be.
  • NIST Cybersecurity Requirements: NIST has an extensive collection of standards, guidance and frameworks for cybersecurity and risk management. Most are only advisory for industry but many are mandatory for government. Many end up being required compliance items for government contractors as well.
  • DoD Ethics Guidance: Created after extensive coordination with AI thought leaders, ethicists and military professionals, these ethics flow from the need to support mission while also ensuring commitment to the U.S. Constitution, U.S. laws, the Law of War and international treaties.
  • Responsible AI Toolkit: An initiative of DoD CDAO, this framework can help accelerate AI into DoD organizations and missions while ensuring responsible use of AI.
  • White House OMB Guidance on AI: OMB guidance to agencies is requiring each federal department and agency to take action on AI including, as mentioned in the EO, appointing AI leaders. Agencies are also to develop AI strategies for OMB approval, add safeguards for generative AI use, establish AI governance boards, and expand reporting on how agencies use AI including more details on AI system risk and how the agency is managing these risks.
  • Long list of existing laws on use of data in government and industry: We capture a long list of laws applicable to AI in our post on “When Artificial Intelligence Goes Wrong.” Not all of these are relevant to the federal government, but most are.
  • FAR/DFARS/CMMC: These are significant control points over government contractors. These rules require any contractor who holds government data to protect and process it the government’s way, and they require extensive use of NIST guidelines.
  • Individual Agency and Department Policies: Every Cabinet-level department and every agency is also building out AI policies and procedures. Most have already named their AI leaders, and those that have not are in the process of doing so. AI leaders are being established at multiple levels in large organizations.
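As promised in the AI Risk Management Framework item above, here is a minimal sketch of the RMF’s risk-management intent. The four functions (GOVERN, MAP, MEASURE, MANAGE) come from NIST AI RMF 1.0; the project fields, the sample use case, and the idea of listing open items rather than failing a gate are my own illustrative assumptions.

```python
# Illustrative only: the four AI RMF functions are real; every field below
# and the scoring approach are hypothetical assumptions for this sketch.
AI_RMF_FUNCTIONS = ("GOVERN", "MAP", "MEASURE", "MANAGE")

project = {
    "name": "benefits-claims-triage",  # hypothetical agency use case
    "GOVERN": {"chief_ai_officer_signoff": True, "policy_mapped": True},
    "MAP": {"context_documented": True, "impacted_groups_identified": True},
    "MEASURE": {"red_team_completed": False, "vendor_claims_independently_tested": False},
    "MANAGE": {"risk_treatments_assigned": False},
}

def open_items(p: dict) -> list[str]:
    """List outstanding RMF activities: risks to manage, not a gate to fail."""
    return [
        f"{fn}: {item}"
        for fn in AI_RMF_FUNCTIONS
        for item, done in p[fn].items()
        if not done
    ]

print(open_items(project))
# ['MEASURE: red_team_completed', 'MEASURE: vendor_claims_independently_tested',
#  'MANAGE: risk_treatments_assigned']
```

The point of the sketch is the posture: outstanding items are risks to be managed with judgement, not automatic blockers, which is the difference between a framework and a blanket control.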

One thing a review of existing AI regulations reveals is the challenge in finding a bad one to toss out. Each rule and regulation was created to address a real concern, whether associated with waste, fraud, abuse, privacy protections, security, civil rights, fairness, or other needs.

This leads to what is probably the most significant suggestion regarding decontrol of AI. Since it is hard to single out controls to remove entirely, agency leaders should be empowered to use their judgement on which controls to follow and which to exempt themselves from. The most effective means of decontrol may be giving leaders at all levels the means to deviate when their judgement gives them cause to.

Maybe the art form here is to ensure all agency Chief AI Officers are accels vice decels. CIOs, CTOs, and agency heads should also be AI accels. This will help all of them use their judgement on which control requirements are relevant for the project at hand.

The next post in this series, titled “Decontrol AI to Accelerate Solutions,” will capture an example of how leaders can accelerate AI solutions into their enterprise while complying with guidance through some strategic decontrol.


About the Author

Bob Gourley

Bob Gourley is an experienced Chief Technology Officer (CTO), Board Qualified Technical Executive (QTE), author, and entrepreneur with extensive past performance in enterprise IT, corporate cybersecurity, and data analytics. He is the CTO of OODA LLC, a unique team of international experts providing board advisory and cybersecurity consulting services; OODA publishes OODALoop.com. Bob has been an advisor to dozens of successful high-tech startups and has conducted enterprise cybersecurity assessments for businesses in multiple sectors of the economy. He was a career Naval Intelligence Officer and is the former CTO of the Defense Intelligence Agency.