
OpenAI CEO Sam Altman Testifies on “Oversight of A.I.: Rules for Artificial Intelligence” (Livestream: Tuesday, May 16th at 10:00 AM ET)

Fast on the heels of his May 4th meeting at the White House with Vice President Kamala Harris and other top administration officials to discuss responsible AI innovation, OpenAI CEO Sam Altman returns to D.C. – this time on Capitol Hill – as a witness before the United States Senate Committee on the Judiciary, Subcommittee on Privacy, Technology, and the Law.

If your organization is sorting through the risk awareness concerns and competitive landscape brought on by the release of OpenAI’s ChatGPT late last year, this hearing may prove as interesting as Bill Gates’ seminal Senate testimony during the browser wars in 1998. You can find the live stream here at 10:00 AM ET on Tuesday, May 16th. The archived video and subcommittee testimony documents will be added to this post once they are available, along with an on-demand stream of the hearing.

Altman will be joined before the Senate panel by Christina Montgomery, vice president and chief privacy and trust officer at tech giant IBM, and Gary Marcus, a professor emeritus at New York University, who will also provide statements. Details of the live video stream can be found below.

Also included here is a brief overview of U.S. and EU-based legislative efforts that have gained traction since ChatGPT and large language models made their impact on the AI marketplace.

In this climate, our current research questions include:

  • How do we separate the larger societal risk concerns from the risk awareness and competitive opportunities facing corporations and SMBs?
  • For business, what are the tactical considerations, and what belongs to “the long view”?
  • How do we weigh the existential risk against the risk implicit in the exponential speed and scale at which this innovation is moving globally?
  • If we are, in fact, in agreement with the warnings voiced in the recent open letter and by AI pioneer Geoffrey Hinton, is a global governance effort even possible on such an exponential timeline?

Oversight of A.I.: Rules for Artificial Intelligence

Subcommittee Hearing
Date: Tuesday, May 16, 2023
Time: 10:00 AM ET
Location: Dirksen Senate Office Building Room 226
Presiding: Chair Blumenthal
Link to the live stream:  Oversight of A.I.: Rules for Artificial Intelligence | United States Senate Committee on the Judiciary

Witnesses

  • Samuel Altman
    CEO, OpenAI
    San Francisco, CA
  • Christina Montgomery
    Chief Privacy & Trust Officer, IBM
    Cortlandt Manor, NY
  • Gary Marcus
    Professor Emeritus, New York University

Global AI Governance Efforts and US-based AI Legislative Activity

EU lawmakers pass draft of AI Act, includes copyright rules for generative AI | VentureBeat

“The AI Act will have a global impact…”

  • After months of negotiations and two years after draft rules were proposed, EU lawmakers have reached an agreement and passed a draft of the Artificial Intelligence (AI) Act, which would be the first set of comprehensive laws related to AI regulation.
  • The next stage is called the trilogue, when EU lawmakers and member states will negotiate the final details of the bill.
  • According to a report, the members of the European Parliament (MEPs) confirmed previous proposals to put stricter obligations on foundation models, a subcategory of “General Purpose AI” that includes tools such as ChatGPT. Under the proposals, companies that make generative AI tools such as ChatGPT would have to disclose if they have used copyrighted material in their systems.
  • The report cited one significant last-minute change in the draft of the AI Act related to generative AI models, which “would have to be designed and developed in accordance with EU law and fundamental rights, including freedom of expression.”
  • While a variety of state-level AI-related bills have been passed in the U.S., it is comprehensive government regulation, in the form of the EU AI Act, that many in the AI and legal communities have been waiting for.
  • Back in December, Avi Gesser, partner at Debevoise & Plimpton and co-chair of the firm’s cybersecurity, privacy, and artificial intelligence practice group, told VentureBeat that the AI Act attempts to put together a risk-based regime to address the highest-risk outcomes of AI, while striking a balance so the laws do not clamp down on innovation. “It’s about recognizing that there are going to be some low-risk use cases that don’t require a heavy burden of regulation,” he said. As with the privacy-focused GDPR, he explained, the EU AI Act will be an example of a comprehensive European law coming into effect and slowly trickling into various state- and sector-specific laws in the U.S.
  • A May 10 analysis in the National Law Review noted: “The AI Act will have a global impact, as it will apply to organizations providing or using AI systems in the EU; and providers or users of AI systems located in a third country (including the UK and US), if the output produced by those AI systems is used in the EU.” (1)

EU lawmakers edge closer to AI Act, taking aim at facial recognition, profiling – CoinGeek

“Despite the changes, the AI Act still has some major points of concern.”

  • The AI Act will be the most comprehensive regulatory framework for the budding AI technology, says the EU Parliament. It has been in the works for two years now and aims to protect the region from the adverse effects of artificial intelligence.
  • The AI Act was first introduced by the European Commission in 2021 as a blueprint for “human and trustworthy AI.” However, civil society groups, human rights activists, and even some legislators quickly raised concerns, saying it did not cover the full extent of AI use in Europe. Two years later, legislators say they have struck a balance between promoting AI development and protecting the public.

EU sets the pace with the AI Act, but concerns remain

  • In its amendments to the AI Act, the European Parliament addressed many of the concerns raised two years ago about the draft bill.
  • This includes a ban on facial recognition in publicly accessible spaces and on profiling based on gender, race, religion, or political orientation.
  • It also outlaws predictive policing based on location or past criminal behavior, as well as the creation of facial recognition databases by scraping biometric data from social media or CCTV footage. The latter has already landed Clearview AI, the controversial facial recognition startup, in trouble with French regulators.
  • Despite the changes, the AI Act still has some major points of concern.
    • One key concern is risk classification: the Act tiers AI tools from low risk to unacceptable risk, the latter covering any application that threatens safety, health, the environment, or fundamental rights. However, it leaves it to AI developers to self-police and determine which risk tier their programs fall into (a purely illustrative sketch of this kind of self-classification appears after this list).
    • While it’s tough on surveillance and facial recognition, the Act also gives law enforcement agencies leeway to use it to prosecute serious crimes. This, digital rights group EDRi says, “could incentivize mass retention of CCTV footage and biometric data.” (2)
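
To make the self-classification concern concrete, here is a minimal, purely illustrative Python sketch of how a provider might record and tier its own systems. The tier names, the AISystemProfile fields, and the classify() rules are assumptions made for illustration only; they are not the Act’s legal categories or criteria.

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskTier(Enum):
    """Illustrative tiers loosely modeled on the draft EU AI Act's low-to-unacceptable scale."""
    MINIMAL = 1
    LIMITED = 2
    HIGH = 3
    UNACCEPTABLE = 4


@dataclass
class AISystemProfile:
    """Hypothetical self-assessment record an AI provider might keep for each system."""
    name: str
    threatens_fundamental_rights: bool = False   # e.g. threats to safety, health, environment, rights
    biometric_surveillance: bool = False         # e.g. scraping faces to build recognition databases
    safety_critical_domain: bool = False         # e.g. medical devices, critical infrastructure
    interacts_with_people: bool = False          # e.g. chatbots, generative content tools
    notes: list = field(default_factory=list)


def classify(profile: AISystemProfile) -> RiskTier:
    """Toy self-classification logic; under the draft Act this judgment is left to the developer."""
    if profile.threatens_fundamental_rights or profile.biometric_surveillance:
        return RiskTier.UNACCEPTABLE
    if profile.safety_critical_domain:
        return RiskTier.HIGH
    if profile.interacts_with_people:
        return RiskTier.LIMITED   # transparency duties, e.g. disclosing AI-generated content
    return RiskTier.MINIMAL


if __name__ == "__main__":
    chatbot = AISystemProfile(name="customer-support-chatbot", interacts_with_people=True)
    print(chatbot.name, "->", classify(chatbot).name)   # customer-support-chatbot -> LIMITED
```

The point of the sketch is the governance gap the article describes: the entity writing the classify() rules is the same entity whose products are being classified.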

Lawmakers Introduce Bill to Keep AI from Going Nuclear  – “Block Nuclear Launch by Autonomous AI Act of 2023” – Nextgov

“…lawmakers reignited the legislative charge into investigating and regulating how automated and artificial intelligence systems will be implemented in crucial operations.”

Congress has undertaken a flurry of new activity targeting artificial intelligence technologies in recent weeks, following the rapid advancement and ambiguous implications of AI systems.

Lawmakers from both chambers—namely Sen. Ed Markey, D-Mass., and Reps. Ted Lieu, D-Calif., Don Beyer, D-Va., and Ken Buck, R-Colo.—introduced new legislation Wednesday to respond to mounting concern over unregulated AI systems making important societal decisions and running infrastructure operations.

The bicameral, bipartisan bill aims to better regulate AI systems that could govern U.S. nuclear weapons. Titled the “Block Nuclear Launch by Autonomous AI Act of 2023,” it primarily seeks to mandate a human element in all protocols and systems that govern the nation’s nuclear weapons.

“As we live in an increasingly digital age, we need to ensure that humans hold the power alone to command, control, and launch nuclear weapons—not robots,” said Markey in a news release. “We need to keep humans in the loop on making life or death decisions to use deadly force, especially for our most dangerous weapons.”

Building off of the U.S. Department of Defense’s 2022 Nuclear Posture Review and Geneva Convention regulations, the bill would codify a requirement for “meaningful human control” within any autonomous weapons system.

The bill has garnered cosponsorship from fellow Sens. Bernie Sanders, I-Vt., Elizabeth Warren, D-Mass., and Jeff Merkley, D-Ore.

“While U.S. military use of AI can be appropriate for enhancing national security purposes, use of AI for deploying nuclear weapons without a human chain of command and control is reckless, dangerous, and should be prohibited,” said Buck. “I am proud to co-sponsor this legislation to ensure that human beings, not machines, have the final say over the most critical and sensitive military decisions.”

Sen. Mark Warner, D-Va., also followed suit in asking for more transparency surrounding AI systems, issuing several letters Wednesday to chief executive officers of prominent tech companies expanding into the AI field, including Tim Cook of Apple and Sundar Pichai of Google. Warner implored tech leadership to thoroughly investigate the harms documented in AI and machine learning technologies, namely bias learned from training data.

Warner’s efforts and the new Block Nuclear Launch Act mirror ongoing federal inquiries into better regulating the emerging but powerful AI/ML software scene. Other lawmakers have recently used AI tools like the popular ChatGPT to make a case for its regulation, and its place in nuclear technologies specifically has been discussed by the Nuclear Regulatory Commission, which released a draft of its first Artificial Intelligence Strategic Plan last June. Like the bill, the Commission’s document also underscores the need for consistent human-machine interaction. (3)

House Bill Mandates Disclosure of AI-Generated Content in Political Ads –  “REAL Political Ads Act” – Nextgov

“The legislation follows the Republican National Committee’s release of an entirely AI-generated video…”

A House Democrat introduced legislation on Tuesday requiring political advertisements to include a disclaimer if they were created using artificial intelligence. The proposal comes as concerns about the use of AI software—and its potential to generate entirely fake or misleading text, audio and video—continue to mount ahead of the 2024 presidential primary season.

The bill—the REAL Political Ads Act—was introduced by Rep. Yvette Clarke, D-N.Y., who has been a prominent voice in Congress about the potential harms and biases of AI-generated content. Clarke, who serves on the House Energy and Commerce Committee and the House Homeland Security Committee, previously introduced legislation in 2019 and 2021 that would require that deep fakes—digitally manipulated photos, video or audio—include “digital watermarks” and a written disclaimer stating that the pieces of media had been altered or generated.

Clarke’s legislation would amend federal campaign election laws to require that political ads “include a statement within the contents of the advertisements if generative AI was used to generate any image or video footage in the advertisements.”

In a statement, Clarke warned that “the upcoming 2024 election cycle will be the first time in U.S. history where AI-generated content will be used in political ads by campaigns, parties, and Super PACs.”

“Unfortunately, our current laws have not kept pace with the rapid development of artificial intelligence technologies,” she added. “If AI-generated content can manipulate and deceive people on a large scale, it can have devastating consequences for our national security and election security. It’s time we sound the alarm, and work to ensure our campaign finance laws keep pace with the innovation of new technologies.”

The introduction of Clarke’s bill comes as the use of generative AI tools has crossed over into the world of political campaigns. Last week, following President Joe Biden’s reelection announcement, the Republican National Committee released a video that it said was entirely created through the use of AI software. The ad envisions a dystopian future where, after winning the 2024 presidential election, Biden’s leadership is undermined by a series of domestic and international crises, including a Chinese invasion of Taiwan.

Clarke told The Washington Post in a May 2 article that her legislation was in direct response to the RNC’s video, which included a disclaimer in the top-left corner stating that it was “built entirely with AI imagery.” She warned, however, that “there will be those who will not want to disclose that it’s AI-generated, and we want to protect against that, particularly when we look at the political season before us.”

While generative AI has not played a prominent role in political campaigns until now, digitally altered videos and audio have been used to spread mis- and disinformation in recent years. Ahead of the 2020 elections, a video of then-House Speaker Nancy Pelosi, D-Calif., was manipulated to make it appear as though she was intoxicated while giving a speech. That video, as well as a similarly doctored video from 2019, received millions of views on social media platforms. (4)

New AI Research Funding to Focus on 6 Areas – Nextgov

“The federal government aims to capitalize on the rapid innovation in the artificial intelligence sector.”

The Biden administration is opening seven new artificial intelligence research institutes, fueled by $140 million in federal funding, the White House announced Thursday. The National Science Foundation will helm operations, with support from fellow government agencies. The institutes will focus on six research topics:

  • Trustworthy AI, under the University of Maryland-led Institute for Trustworthy AI in Law & Society.
  • Intelligent agents for cybersecurity, under the University of California Santa Barbara-led AI Institute for Agent-based Cyber Threat Intelligence and Operation.
  • Climate-smart agriculture and forestry, under the University of Minnesota Twin Cities-led AI Institute for Climate-Land Interactions, Mitigation, Adaptation, Tradeoffs and Economy.
  • Neural and cognitive foundations of AI, under the Columbia University-led AI Institute for Artificial and Natural Intelligence.
  • AI for decision-making, under the Carnegie Mellon University-led AI-Institute for Societal Decision-Making.
  • And AI-augmented learning to expand education opportunities and improve student outcomes, under both the University of Illinois, Urbana-Champaign-led AI Institute for Inclusive Intelligent Technologies for Education and the University at Buffalo-led AI Institute for Exceptional Education.

The broad goals of these research initiatives are to harness AI technologies to advance human health and development research, strengthen cyber defenses, and aid climate-resilient agricultural practices.

Public sector-funded research and enhanced private sector cooperation are two of the new commitments the Biden administration will incorporate in its evolving tech policy surrounding emerging and critical technology systems.

The major influx of new funding signals the federal government’s intent to continue innovation in AI and machine learning technologies while simultaneously working to mitigate the risks posed by generative technologies. (5)

Senate Bill Looks to Train AI-Ready Workforce, Focus on Risk Mitigation – Nextgov

“The bipartisan legislation…aims to help public sector employees catch up with advancing AI technologies.”

New legislation is responding to the federal workforce’s knowledge gap concerning advancing artificial intelligence systems, with a team of bipartisan senators aiming to create a new training program specifically for leaders at government agencies.

Introduced by Sens. Gary Peters, D-Mich., and Mike Braun, R-Ind., the Artificial Intelligence Leadership Training Act, first announced on May 11, establishes an AI training program within the Office of Personnel Management focused on training covered and eligible employees on artificial intelligence systems that may be incorporated into federal operations.

The ultimate goal of the bill is to improve the federal workforce’s skills and acumen regarding AI applications, a technology that will continue to evolve rapidly and seep into daily government functions.

“As the federal government continues to invest in and use artificial intelligence tools, decision-makers in the federal government must have the appropriate training to ensure this technology is used responsibly and ethically,” said Peters in a press release. “With AI training, federal agency leaders will have the expertise needed to ensure this technology benefits the American people and to mitigate potential harms, such as bias or discrimination.”

Some of the mandated subjects for the education program stipulated in the bill include defining AI, basic functionality, risk and benefit analyses, how data informs AI algorithms, risk mitigation techniques and a broader infrastructure to govern AI system deployment. It would also require updates to the curriculum from the OPM director.

“In the past couple of years, we have seen unprecedented development and adoption of AI across industries. We must ensure that government leaders are trained to keep up with the advancements in AI and recognize the benefits and risks of this tool,” Braun said.

Identifying risks inherent to AI systems is a key part of the proposed curriculum for the program created by the bill.

The focus on potential hazards in AI systems handling critical data comes amid sophisticated generative AI software capable of further spreading misinformation and mishandling data—two problems federal agencies are looking to prevent.

“The training aims to help federal leaders understand the capabilities, risks, and ethical implications associated with AI, so they can better determine whether an AI capability is appropriate to meet their mission requirements,” Peters’ press release says.

The Artificial Intelligence Leadership Training Act takes cues from government officials who have long advocated for a more tech-savvy workforce, and complements the AI Training Act, another bill introduced by Peters that became law in October 2022.

Several other similar pieces of legislation have circulated on Capitol Hill in recent years that intend to address existing knowledge gaps the federal workforce has concerning advanced, emerging technologies like AI. (6)


About the Author

Daniel Pereira

Daniel Pereira is research director at OODA. He is a foresight strategist, creative technologist, and an information communication technology (ICT) and digital media researcher with 20+ years of experience directing public/private partnerships and strategic innovation initiatives.