
Will The AI Safety Institute – and the Consortium Model – Ensure an AI-safe Future?

Consistent with our recent analysis of government regulation (or overregulation?) of AI, and the introduction of the concept of the “decontrol” of AI, the recent launch of the AI Safety Institute (organized as a public/private consortium) raises the same questions: although it is a public/private collaboration, is the AI Safety Institute an extension of an overall climate of over-regulation of AI in the U.S.? Or is it an architecture that enhances governmental decontrol?

US says leading AI companies join safety consortium to address risks

“The U.S. government has a significant role to play in setting the standards and developing the tools we need to mitigate the risks and harness the immense potential of artificial intelligence.”

The Biden administration on [Feb 8th] said leading artificial intelligence companies are among more than 200 entities joining a new U.S. consortium to support the safe development and deployment of generative AI.

The consortium will be housed under the U.S. AI Safety Institute (USAISI).  The group is tasked with working on priority actions outlined in President Biden’s October AI executive order “including developing guidelines for red-teaming, capability evaluations, risk management, safety and security, and watermarking synthetic content.”

Major AI companies last year pledged to watermark AI-generated content to make the technology safer. Red-teaming has been used for years in cybersecurity to identify new risks, with the term referring to U.S. Cold War simulations where the enemy was termed the “red team.”  Biden’s order directed agencies to set standards for that testing and to address related chemical, biological, radiological, nuclear, and cybersecurity risks.  In December, the Commerce Department said it was taking the first step toward writing key standards and guidance for the safe deployment and testing of AI.  The consortium represents the largest collection of test and evaluation teams and will focus on creating foundations for a “new measurement science in AI safety,” Commerce said.
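As background on one of those priority actions: most proposed watermarking schemes for AI-generated text are statistical rather than visible. The following minimal sketch, in Python, is loosely modeled on published “green list” academic approaches rather than on any member company’s actual scheme; every name and parameter in it is a hypothetical simplification.

import hashlib
import math

# Fraction of the vocabulary pseudorandomly marked "green" at each step.
# A watermarking generator biases its sampling toward green tokens; a
# detector needs only this shared seeding rule and the text itself.
GREEN_FRACTION = 0.5

def is_green(prev_token: str, token: str) -> bool:
    """Deterministically assign `token` to the green list, seeded by the
    previous token so the green/red partition changes at every position."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] / 255.0 < GREEN_FRACTION

def watermark_z_score(tokens: list[str]) -> float:
    """Z-score of the observed green-token count against the null
    hypothesis of unwatermarked (unbiased) token choices."""
    n = len(tokens) - 1
    greens = sum(is_green(a, b) for a, b in zip(tokens, tokens[1:]))
    expected = GREEN_FRACTION * n
    stddev = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (greens - expected) / stddev

# Usage: a large positive z-score (roughly 4 or more) suggests watermarked text.
print(f"z = {watermark_z_score('a short unwatermarked example sentence'.split()):.2f}")

The detail worth noting is that detection requires no access to the generating model, only to the shared seeding rule – which is what makes standardized, interoperable guidelines of the kind the consortium is tasked to produce plausible.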

Biden-Harris Administration Announces First-Ever Consortium Dedicated to AI Safety

“The consortium represents the largest collection of test and evaluation teams established to date and will focus on establishing the foundations for a new measurement science in AI safety.”

From the Department of Commerce announcement:

…U.S. Secretary of Commerce Gina Raimondo announced the creation of the U.S. AI Safety Institute Consortium (AISIC), which will unite AI creators and users, academics, government and industry researchers, and civil society organizations in support of the development and deployment of safe and trustworthy artificial intelligence (AI).

The consortium includes more than 200 member companies and organizations that are on the frontlines of creating and using the most advanced AI systems and hardware, the nation’s largest companies and most innovative startups, civil society and academic teams that are building the foundational understanding of how AI can and will transform our society, and representatives of professions with deep engagement in AI’s use today. The consortium represents the largest collection of test and evaluation teams established to date and will focus on establishing the foundations for a new measurement science in AI safety. The consortium also includes state and local governments, as well as non-profits, and will work with organizations from like-minded nations that have a key role to play in developing interoperable and effective tools for safety around the world.

What Next?

Some initial observations:

  • Highlighted below in the list of AISIC members: we are encouraged to see that the major players in AI, and the research organizations that have influenced our analysis of AI over the years, are included in the inaugural membership of the AI Safety Institute.
  • Also encouraging is the breadth of industry sectors represented on the membership list, from financial services to biotech, high tech, and defense.
  • Conspicuously absent from the list: Tristan Harris’ Center for Humane Technology, which has aggressively advanced the “AI Dilemma” argument – and the double exponential pace of the technology – over the last 18 months.
  • It is hard to say at this point whether the 200+ member scale of the consortium will be an operational help or a hindrance. There are many directions in which the self-interest of each participating organization and industry sector could take this effort.
  • Implementation of the CHIPS and Science Act is also housed at Commerce and NIST. It is questionable whether these organizations have the resilience and bandwidth to manage and implement both the Chip Wars and the Future of AI under one departmental roof.
  • The AISI leadership – Elizabeth Kelly and Elham Tabassi – are clearly competent, and their bios are stellar (see below). But it will really be all about leadership and management as the rubber meets the road on this unprecedented technological challenge. Plenty of unknowns and uncertainty remain here – no matter how seasoned the leadership may be.

More than anything, questions abound – which we will be tracking, framing further, and trying to answer over the course of 2024 – including:

  • A new U.S. consortium to support the safe development and deployment of generative AI? Is the AISIC structure more private-sector self-regulation in partnership with the U.S. government, or, in the end, is it more “regulation-regulation” and/or de-regulation?
  • Developing guidelines for red-teaming, capability evaluations, risk management, safety and security, and watermarking synthetic content? Does the voluntary participation of the private sector, universities, and research organizations decouple the consortium’s efforts from whatever lies ahead in terms of formal regulation by the legislative branch? Or will the efforts of the AISI inform and shape future regulation? How will the AISI be structured to share its findings with Congress?
  • The consortium represents the largest collection of test and evaluation teams and will focus on creating foundations for a “new measurement science in AI safety”? Is the creation of a new measurement science in AI safety code for regulation? Or is it a scientific framework for AI safety that consortium members will commit to adhering to moving forward? What is the historical precedent for the creation of a “new measurement science”?
  • The consortium model?  Is the AISI an extension of the “decontrol of AI” based on its distributed, collaborative, public/private architecture?  Or is it, in the end, a centralization at NIST and Commerce?
  • Finally, the perennial research question here at OODA Loop: Will the “Double Exponential” Growth of Artificial Intelligence Render Global AI Governance and Safety Efforts Futile? (For a sense of the scale involved, see the back-of-the-envelope sketch after this list.)
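To make the “double exponential” framing concrete, here is a back-of-the-envelope sketch in Python. The numbers are purely hypothetical, chosen only to show how a growth process whose doubling rate itself doubles (C(t) = 2^(2^t)) dwarfs an ordinary exponential (E(t) = 2^t) within a handful of periods:

# Purely illustrative: compare plain exponential growth with double
# exponential growth over six hypothetical periods.
for t in range(1, 7):
    exponential = 2 ** t                  # doubling at a fixed rate
    double_exponential = 2 ** (2 ** t)    # the doubling rate itself doubles
    print(f"t={t}: exponential={exponential:>3}, "
          f"double exponential={double_exponential:,}")

By t = 6, the exponential curve has reached 64, while the double exponential curve has passed 1.8 × 10^19 – a gap that any governance process calibrated to the first curve will struggle to close.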

AI Safety Institute Membership

The U.S. AI Safety Institute Consortium (AISIC) provided a list of the consortium’s inaugural members:

A • Accel AI Institute • Accenture LLP • Adobe • Advanced Micro Devices (AMD) • AFL-CIO Technology Institute (Provisional Member) • AI Risk and Vulnerability Alliance • AIandYou • Allen Institute for Artificial Intelligence • Alliance for Artificial Intelligence in Healthcare • Altana • Alteryx • Amazon.com • American University, Kogod School of Business • AmpSight • Anika Systems Incorporated • Anthropic • Apollo Research • Apple • Ardent Management Consulting • Aspect Labs • Atlanta University Center Consortium • Autodesk, Inc.

B • BABL AI Inc. • Backpack Healthcare • Bank of America • Bank Policy Institute • Baylor College of Medicine • Beck’s Superior Hybrids • Benefits Data Trust • Booz Allen Hamilton • Boston Scientific • BP • BSA | The Software Alliance • BSI Group America

C • Canva • Capitol Technology University • Carnegie Mellon University • Casepoint • Center for a New American Security • Center for AI Safety • Center for Security and Emerging Technology (Georgetown University) • Center for Democracy and Technology • Centers for Medicare & Medicaid Services • Centre for the Governance of AI • Cisco Systems • Citadel AI • Citigroup • CivAI • Civic Hacker LLC • Cleveland Clinic • Coalition for Health AI (CHAI) (Provisional Member) • Cohere • Common Crawl Foundation • Cornell University • Cranium AI • Credo AI • CrowdStrike • Cyber Risk Institute

D • Dark Wolf Solutions • Data & Society Research Institute • Databricks • Dataiku • DataRobot • Deere & Company • Deloitte • Beckman Coulter • Digimarc • DLA Piper • Drexel University • Drummond Group • Duke University • The Carl G Grefenstette Center for Ethics at Duquesne University

E • EBG Advisors • EDM Council • Eightfold AI • Elder Research • Electronic Privacy Information Center • Elicit • EleutherAI Institute • Emory University • Enveil • EqualAI • Erika Britt Consulting • Ernst & Young, LLP • Exponent

F • FAIR Institute • FAR AI • Federation of American Scientists • FISTA • ForHumanity • Fortanix, Inc. • Free Software Foundation • Frontier Model Forum • Financial Services Information Sharing and Analysis Center (FS-ISAC) • Future of Privacy Forum

G • Gate Way Solutions • George Mason University • Georgia Tech Research Institute • GitHub • Gladstone AI • Google • Gryphon Scientific • Guidepost Solutions

H • Hewlett Packard Enterprise • Hispanic Tech and Telecommunications Partnership (HTTP) • Hitachi Vantara Federal • Hugging Face • Human Factors and Ergonomics Society • Humane Intelligence • Hypergame AI

I • IBM • Imbue • Indiana University • Inflection AI • Information Technology Industry Council • Institute for Defense Analyses • Institute for Progress • Institute of Electrical and Electronics Engineers, Incorporated (IEEE) • Institute of International Finance • Intel Corporation • Intertrust Technologies • Iowa State University, Translational AI Center (TrAC)

J • JPMorgan Chase • Johns Hopkins University

K • Kaiser Permanente • Keysight Technologies • Kitware, Inc. • Knexus Research • KPMG

L • LA Tech4Good • Leadership Conference Education Fund, Center for Civil Rights and Technology • Leela AI • Linux Foundation, AI & Data • Lucid Privacy Group (Provisional Member) • Lumenova AI

M • Magnit Global Solutions • Manatt, Phelps & Phillips • MarkovML • Massachusetts Institute of Technology, Lincoln Laboratory • Mastercard • Meta • Microsoft • MLCommons • Model Evaluation and Threat Research (METR, formerly ARC Evals) • Modulate • MongoDB

N • National Fair Housing Alliance • National Retail Federation • New York Public Library • New York University • NewsGuard Technologies • Northrop Grumman • NVIDIA

O • ObjectSecurity LLC • Ohio State University • O’Neil Risk Consulting & Algorithmic Auditing, Inc. (ORCAA) • OpenAI • OpenPolicy • OWASP (AI Exchange & Top 10 for LLM Apps) • University of Oklahoma, Data Institute for Societal Challenges (DISC) • University of Oklahoma, NSF AI Institute for Research on Trustworthy AI in Weather, Climate, and Coastal Oceanography (AI2ES)

P • Palantir • Partnership on AI (PAI) • Pfizer • Preamble • PwC • Princeton University • Purdue University, Governance and Responsible AI Lab (GRAIL)

Q • Qualcomm Incorporated • Queer in AI

R • RAND Corporation • Redwood Research Group • Regions Bank • Responsible AI Institute • Robust Intelligence • RTI International

S • SaferAI • Salesforce • SAS Institute • SandboxAQ • Scale AI • Science Applications International Corporation • Scripps College • SecureBio • Society of Actuaries Research Institute • Software & Information Industry Association • SonarSource • SRI International • Stability AI (Provisional Member) • stackArmor • Stanford Institute for Human-Centered AI, Stanford Center for Research on Foundation Models, Stanford Regulation, Evaluation, and Governance Lab • State of California, Department of Technology • State of Kansas, Office of Information Technology Services • StateRAMP • Subtextive • Syracuse University

T • Taraaz • Tenstorrent USA • Texas A&M University • Thomson Reuters (Provisional Member) • Touchstone Evaluations • Trustible • TrueLaw • Trufo

U • UnidosUS • UL Research Institutes • University at Albany, SUNY Research Foundation • University at Buffalo, Institute for Artificial Intelligence and Data Science • University at Buffalo, Center for Embodied Autonomy and Robotics • University of Texas at San Antonio (UTSA) • University of Maryland, College Park • University of Notre Dame du Lac • University of Pittsburgh • University of South Carolina, AI Institute • University of Southern California • U.S. Bank National Association

V • Vanguard • Vectice • Visa

W • Wells Fargo & Company • Wichita State University, National Institute for Aviation Research • William Marsh Rice University • Wintrust Financial Corporation • Workday

U.S. Commerce Secretary Gina Raimondo Announces Key Executive Leadership at U.S. AI Safety Institute

“The National Institute of Standards and Technology (NIST) at Commerce will house the U.S. AI Safety Institute”

U.S. Secretary of Commerce Gina Raimondo announced [on Feb. 7th] key members of the executive leadership team to lead the U.S. AI Safety Institute (AISI), which will be established at the National Institute of Standards and Technology (NIST). Raimondo named Elizabeth Kelly to lead the Institute as its inaugural Director and Elham Tabassi to serve as Chief Technology Officer.

Elizabeth Kelly, as AISI Director, will be responsible for providing executive leadership, management, and oversight of the AI Safety Institute and coordinating with other AI policy and technical initiatives throughout the Department, NIST, and across the government. Elizabeth Kelly serves as Special Assistant to the President for Economic Policy at the White House National Economic Council, where she helps lead the Administration’s efforts on financial regulation and technology policy, including artificial intelligence. Elizabeth was a driving force behind the domestic components of the AI executive order, spearheading efforts to promote competition, protect privacy, and support workers and consumers, and helped lead Administration engagement with allies and partners on AI governance. Elizabeth holds a J.D. from Yale Law School, an MSc in Comparative Social Policy from the University of Oxford, and a B.A. from Duke University.

Elham Tabassi, as the Chief Technology Officer, will be responsible for leading key technical programs of the institute, focused on supporting the development and deployment of AI that is safe, secure and trustworthy. She will be responsible for shaping efforts at NIST and with the broader AI community to conduct research, develop guidance, and conduct evaluations of AI models including advanced large language models in order to identify and mitigate AI safety risks. Elham Tabassi has played a leading role in the Department’s AI work at NIST, and in 2023 was named one of TIME Magazine’s 100 Most Influential People in AI. As NIST’s Trustworthy and Responsible AI program lead, she spearheaded development of the widely acclaimed NIST AI Risk Management Framework (AI RMF), a voluntary tool that supports better management of risks to individuals, organizations, and society associated with AI. Tabassi is a Senior Research Scientist and most recently served as the Associate Director for Emerging Technologies in NIST’s Information Technology Laboratory (ITL). In that role, she helped guide strategic direction for research, development, standards, testing and evaluation in the areas of emerging technologies such as artificial intelligence.


About the Author

Daniel Pereira

Daniel Pereira is research director at OODA. He is a foresight strategist, creative technologist, and an information communication technology (ICT) and digital media researcher with 20+ years of experience directing public/private partnerships and strategic innovation initiatives.