Consistent with our recent analysis of government regulation (or overregulation?) of AI, and our introduction of the concept of the “decontrol” of AI, the recent launch of the AI Safety Institute (organized as a public/private consortium) raises the same questions: although it is a public/private collaboration, is the AI Safety Institute an extension of an overall climate of over-regulation of AI in the U.S.? Or is it an architecture that enhances governmental decontrol?
“The U.S. government has a significant role to play in setting the standards and developing the tools we need to mitigate the risks and harness the immense potential of artificial intelligence.”
The Biden administration on [Feb 8th] said leading artificial intelligence companies are among more than 200 entities joining a new U.S. consortium to support the safe development and deployment of generative AI.
The consortium will be housed under the U.S. AI Safety Institute (USAISI). The group is tasked with working on priority actions outlined in President Biden’s October AI executive order “including developing guidelines for red-teaming, capability evaluations, risk management, safety and security, and watermarking synthetic content.”
Major AI companies last year pledged to watermark AI-generated content to make the technology safer. Red-teaming has been used for years in cybersecurity to identify new risks; the term derives from U.S. Cold War simulations in which the adversary was designated the “red team.” Biden’s order directed agencies to set standards for that testing and to address related chemical, biological, radiological, nuclear, and cybersecurity risks. In December, the Commerce Department said it was taking the first step toward writing key standards and guidance for the safe deployment and testing of AI. The consortium represents the largest collection of test and evaluation teams and will focus on creating foundations for a “new measurement science in AI safety,” Commerce said.
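The watermarking commitment is easier to reason about with something concrete. Below is a minimal sketch of one published family of techniques for statistically watermarking generated text (a “green list” token-bias scheme). It illustrates the general idea only, is not the method any consortium member has adopted, and every name and threshold in it is a hypothetical choice for the example.

```python
# Minimal sketch of "green list" text watermark detection. A cooperating
# generator biases sampling toward a pseudorandom "green" subset of the
# vocabulary at each step; a detector recounts green tokens and flags
# text whose green fraction is improbably high for unwatermarked prose.
import hashlib
import math

GREEN_FRACTION = 0.5  # hypothetical fraction of vocabulary marked green


def is_green(prev_token: str, token: str) -> bool:
    """Deterministically assign `token` to the green list, reseeded by
    the previous token so the partition changes at every position."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] < 256 * GREEN_FRACTION


def detect(tokens: list[str]) -> float:
    """Return a z-score: how far the observed green count sits above
    what unwatermarked text would produce by chance."""
    assert len(tokens) >= 2, "need at least one scored token"
    green = sum(is_green(p, t) for p, t in zip(tokens, tokens[1:]))
    n = len(tokens) - 1
    expected = n * GREEN_FRACTION
    stdev = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (green - expected) / stdev


# Unwatermarked text should score near zero; text sampled with the green
# bias should score several standard deviations above it.
print(f"z = {detect('the model produced this output'.split()):.2f}")
```

Detection in this style requires knowing the generator’s seeding scheme, which is exactly the kind of interoperability detail that standardized guidance from the consortium would have to pin down.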
“The consortium represents the largest collection of test and evaluation teams established to date and will focus on establishing the foundations for a new measurement science in AI safety.”
From the Department of Commerce announcement:
…U.S. Secretary of Commerce Gina Raimondo announced the creation of the U.S. AI Safety Institute Consortium (AISIC), which will unite AI creators and users, academics, government and industry researchers, and civil society organizations in support of the development and deployment of safe and trustworthy artificial intelligence (AI).
The consortium includes more than 200 member companies and organizations that are on the frontlines of creating and using the most advanced AI systems and hardware, the nation’s largest companies and most innovative startups, civil society and academic teams that are building the foundational understanding of how AI can and will transform our society, and representatives of professions with deep engagement in AI’s use today. The consortium represents the largest collection of test and evaluation teams established to date and will focus on establishing the foundations for a new measurement science in AI safety. The consortium also includes state and local governments, as well as non-profits, and will work with organizations from like-minded nations that have a key role to play in developing interoperable and effective tools for safety around the world.
Some initial observations: more than anything, questions abound, which we will be tracking, framing further, and trying to answer over the course of 2024.
The U.S. AI Safety Institute Consortium (AISIC) provided a list of the consortium’s inaugural members:
A • Accel AI Institute • Accenture LLP • Adobe • Advanced Micro Devices (AMD) • AFL-CIO Technology Institute (Provisional Member) • AI Risk and Vulnerability Alliance • AIandYou • Allen Institute for Artificial Intelligence • Alliance for Artificial Intelligence in Healthcare • Altana • Alteryx • Amazon.com • American University, Kogod School of Business • AmpSight • Anika Systems Incorporated • Anthropic • Apollo Research • Apple • Ardent Management Consulting • Aspect Labs • Atlanta University Center Consortium • Autodesk, Inc.
B • BABL AI Inc. • Backpack Healthcare • Bank of America • Bank Policy Institute • Baylor College of Medicine • Beck’s Superior Hybrids • Benefits Data Trust • Booz Allen Hamilton • Boston Scientific • BP • BSA | The Software Alliance • BSI Group America
C • Canva • Capitol Technology University • Carnegie Mellon University • Casepoint • Center for a New American Security • Center for AI Safety • Center for Security and Emerging Technology (Georgetown University) • Center for Democracy and Technology • Centers for Medicare & Medicaid Services • Centre for the Governance of AI • Cisco Systems • Citadel AI • Citigroup • CivAI • Civic Hacker LLC • Cleveland Clinic • Coalition for Health AI (CHAI) (Provisional Member) • Cohere • Common Crawl Foundation • Cornell University • Cranium AI • Credo AI • CrowdStrike • Cyber Risk Institute
D • Dark Wolf Solutions • Data & Society Research Institute • Databricks • Dataiku • DataRobot • Deere & Company • Deloitte • Beckman Coulter • Digimarc • DLA Piper • Drexel University • Drummond Group • Duke University • The Carl G Grefenstette Center for Ethics at Duquesne University
E • EBG Advisors • EDM Council • Eightfold AI • Elder Research • Electronic Privacy Information Center • Elicit • EleutherAI Institute • Emory University • Enveil • EqualAI • Erika Britt Consulting • Ernst & Young, LLP • Exponent
F • FAIR Institute • FAR AI • Federation of American Scientists • FISTA • ForHumanity • Fortanix, Inc. • Free Software Foundation • Frontier Model Forum • Financial Services Information Sharing and Analysis Center (FS-ISAC) • Future of Privacy Forum
G • Gate Way Solutions • George Mason University • Georgia Tech Research Institute • GitHub • Gladstone AI • Google • Gryphon Scientific • Guidepost Solutions
H • Hewlett Packard Enterprise • Hispanic Tech and Telecommunications Partnership (HTTP) • Hitachi Vantara Federal • Hugging Face • Human Factors and Ergonomics Society • Humane Intelligence • Hypergame AI
I • IBM • Imbue • Indiana University • Inflection AI • Information Technology Industry Council • Institute for Defense Analyses • Institute for Progress • Institute of Electrical and Electronics Engineers, Incorporated (IEEE) • Institute of International Finance • Intel Corporation • Intertrust Technologies • Iowa State University, Translational AI Center (TrAC)
J • JPMorgan Chase • Johns Hopkins University
K • Kaiser Permanente • Keysight Technologies • Kitware, Inc. • Knexus Research • KPMG
L • LA Tech4Good • Leadership Conference Education Fund, Center for Civil Rights and Technology • Leela AI • Linux Foundation, AI & Data • Lucid Privacy Group (Provisional Member) • Lumenova AI
M • Magnit Global Solutions • Manatt, Phelps & Phillips • MarkovML • Massachusetts Institute of Technology, Lincoln Laboratory • Mastercard • Meta • Microsoft • MLCommons • Model Evaluation and Threat Research (METR, formerly ARC Evals) • Modulate • MongoDB
N • National Fair Housing Alliance • National Retail Federation • New York Public Library • New York University • NewsGuard Technologies • Northrop Grumman • NVIDIA
O • ObjectSecurity LLC • Ohio State University • O’Neil Risk Consulting & Algorithmic Auditing, Inc. (ORCAA) • OpenAI • OpenPolicy • OWASP (AI Exchange & Top 10 for LLM Apps) • University of Oklahoma, Data Institute for Societal Challenges (DISC) • University of Oklahoma, NSF AI Institute for Research on Trustworthy AI in Weather, Climate, and Coastal Oceanography (AI2ES)
P • Palantir • Partnership on AI (PAI) • Pfizer • Preamble • PwC • Princeton University • Purdue University, Governance and Responsible AI Lab (GRAIL)
Q • Qualcomm Incorporated • Queer in AI
R • RAND Corporation • Redwood Research Group • Regions Bank • Responsible AI Institute • Robust Intelligence • RTI International
S • SaferAI • Salesforce • SAS Institute • SandboxAQ • Scale AI • Science Applications International Corporation • Scripps College • SecureBio • Society of Actuaries Research Institute • Software & Information Industry Association • SonarSource • SRI International • Stability AI (Provisional Member) • stackArmor • Stanford Institute for Human-Centered AI, Stanford Center for Research on Foundation Models, Stanford Regulation, Evaluation, and Governance Lab • State of California, Department of Technology • State of Kansas, Office of Information Technology Services • StateRAMP • Subtextive • Syracuse University
T • Taraaz • Tenstorrent USA • Texas A&M University • Thomson Reuters (Provisional Member) • Touchstone Evaluations • Trustible • TrueLaw • Trufo
U • UnidosUS • UL Research Institutes • University at Albany, SUNY Research Foundation • University at Buffalo, Institute for Artificial Intelligence and Data Science • University at Buffalo, Center for Embodied Autonomy and Robotics • University of Texas at San Antonio (UTSA) • University of Maryland, College Park • University Of Notre Dame Du Lac • University of Pittsburgh • University of South Carolina, AI Institute • University of Southern California • U.S. Bank National Association
V • Vanguard • Vectice • Visa
W • Wells Fargo & Company • Wichita State University, National Institute for Aviation Research • William Marsh Rice University • Wintrust Financial Corporation • Workday
“The National Institute of Standards and Technology (NIST) at Commerce will house the U.S. AI Safety Institute”
U.S. Secretary of Commerce Gina Raimondo announced [on Feb. 7th] key members of the executive leadership team to lead the U.S. AI Safety Institute (AISI), which will be established at the National Institute of Standards and Technology (NIST). Raimondo named Elizabeth Kelly to lead the Institute as its inaugural Director and Elham Tabassi to serve as Chief Technology Officer.
Elizabeth Kelly, as AISI Director, will be responsible for providing executive leadership, management, and oversight of the AI Safety Institute and for coordinating with other AI policy and technical initiatives throughout the Department, NIST, and across the government. Kelly serves as Special Assistant to the President for Economic Policy at the White House National Economic Council, where she helps lead the Administration’s efforts on financial regulation and technology policy, including artificial intelligence. She was a driving force behind the domestic components of the AI executive order, spearheading efforts to promote competition, protect privacy, and support workers and consumers, and helped lead Administration engagement with allies and partners on AI governance. Kelly holds a J.D. from Yale Law School, an MSc in Comparative Social Policy from the University of Oxford, and a B.A. from Duke University.
Elham Tabassi, as Chief Technology Officer, will be responsible for leading the institute’s key technical programs, focused on supporting the development and deployment of AI that is safe, secure, and trustworthy. She will be responsible for shaping efforts at NIST and with the broader AI community to conduct research, develop guidance, and evaluate AI models, including advanced large language models, in order to identify and mitigate AI safety risks. Tabassi has played a leading role in the Department’s AI work at NIST, and in 2023 was named one of TIME Magazine’s 100 Most Influential People in AI. As NIST’s Trustworthy and Responsible AI program lead, she spearheaded development of the widely acclaimed NIST AI Risk Management Framework (AI RMF), a voluntary tool that supports better management of risks to individuals, organizations, and society associated with AI. Tabassi is a Senior Research Scientist and most recently served as Associate Director for Emerging Technologies in NIST’s Information Technology Laboratory (ITL), where she helped guide the strategic direction of research, development, standards, testing, and evaluation in emerging technologies such as artificial intelligence.
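The AI RMF itself is guidance rather than software, but its structure is easy to illustrate: version 1.0 organizes risk work into four functions (Govern, Map, Measure, Manage). The toy sketch below tags a hypothetical risk register with those functions; the function names are from the published framework, while the example entries, owners, and class names are invented for illustration.

```python
# Toy illustration of tagging an AI risk register with the four
# functions of the NIST AI Risk Management Framework (AI RMF 1.0).
# The function names are NIST's; everything else here is hypothetical.
from dataclasses import dataclass
from enum import Enum


class RMFFunction(Enum):
    GOVERN = "Govern"    # policies, accountability, organizational culture
    MAP = "Map"          # establishing context and identifying risks
    MEASURE = "Measure"  # analyzing, tracking, and testing risks
    MANAGE = "Manage"    # prioritizing and responding to risks


@dataclass
class RiskEntry:
    description: str
    function: RMFFunction
    owner: str


register = [
    RiskEntry("LLM output can leak personal data", RMFFunction.MEASURE, "eval team"),
    RiskEntry("No sign-off path for model releases", RMFFunction.GOVERN, "CTO office"),
]

for entry in register:
    print(f"[{entry.function.value}] {entry.description} -> {entry.owner}")
```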