Fast on the heels of his May 4th meeting at the White House with Vice President Kamala Harris and other top administration officials to discuss responsible AI innovation, OpenAI CEO Sam Altman returns to D.C. – this time on Capitol Hill – as a witness testifying before the United States Senate Committee on the Judiciary, Subcommittee on Privacy, Technology, and the Law.
If your organization is sorting through the risk awareness concerns and competitive landscape brought on by the release of OpenAI’s ChatGPT late last year, this hearing may well prove as interesting as Bill Gates’ seminal Senate testimony during the browser wars in 1998. You can find the live stream here at 10 AM ET on Tuesday, May 16th. The hearing will then be available on demand, and we will update this post once the archived video and subcommittee testimony documents are released.
Altman will be joined in his testimony before the Senate panel by Christina Montgomery, vice president and chief privacy and trust officer at tech giant IBM, and Gary Marcus, a professor emeritus at New York University. Details of the live video stream can be found below.
Also included here is a brief overview of U.S. and EU legislative efforts that have gained traction since the arrival of ChatGPT and large language models in the AI marketplace.
In this climate, our current research questions include:
“The AI Act will have a global impact…”
Artificial Intelligence: new transparency and risk-management rules for AI systems have been endorsed by Parliament's internal market and civil liberties committees.
All MEPs are expected to vote on the mandate in June so that negotiations can start with the Council.
— European Parliament (@Europarl_EN) May 11, 2023
“Despite the changes, the AI Act still has some major points of concern.”
EU sets the pace with the AI Act, but concerns remain
“…lawmakers reignited the legislative charge into investigating and regulating how automated and artificial intelligence systems will be implemented in crucial operations.”
Congress has undertaken a flurry of new activity targeting artificial intelligence technologies in recent weeks, following the rapid advancement and ambiguous implications of AI systems.
Lawmakers from both chambers—namely Sen. Ed Markey, D-Mass., and Reps. Ted Lieu, D-Calif., Don Beyer, D-Va., and Ken Buck, R-Colo.—introduced new legislation Wednesday responding to mounting concerns about unregulated AI systems making important societal decisions and running infrastructure operations.
The bicameral and bipartisan bill aims to better regulate AI systems that could govern U.S. nuclear weapons. Titled the “Block Nuclear Launch by Autonomous AI Act of 2023,” it primarily seeks to mandate a human element in all protocols and systems that govern the nation’s nuclear weapons.
“As we live in an increasingly digital age, we need to ensure that humans hold the power alone to command, control, and launch nuclear weapons—not robots,” said Markey in a news release. “We need to keep humans in the loop on making life or death decisions to use deadly force, especially for our most dangerous weapons.”
Building on the U.S. Department of Defense’s 2022 Nuclear Posture Review and Geneva Convention regulations, the bill would codify a requirement for “meaningful human control” within any autonomous weapons systems.
The bill has garnered cosponsorship from fellow Sens. Bernie Sanders, I-Vt., Elizabeth Warren, D-Mass., and Jeff Merkley, D-Ore.
“While U.S. military use of AI can be appropriate for enhancing national security purposes, use of AI for deploying nuclear weapons without a human chain of command and control is reckless, dangerous, and should be prohibited,” said Buck. “I am proud to co-sponsor this legislation to ensure that human beings, not machines, have the final say over the most critical and sensitive military decisions.”
Sen. Mark Warner, D-Va., also followed suit in asking for more transparency surrounding AI systems, issuing several letters Wednesday to chief executive officers of prominent tech companies expanding into the AI field, including Tim Cook of Apple and Sundar Pichai of Google. Warner implored tech leadership to thoroughly investigate the documented harms of AI and machine learning technologies, namely bias learned from input data.
Warner’s efforts and the new Block Nuclear Launch Act mirror ongoing federal inquiries into better regulating the emerging but powerful AI/ML software scene. Other lawmakers have recently used AI tools like the popular ChatGPT to make a case for its regulation, and its place in nuclear technologies specifically has been discussed by the Nuclear Regulatory Commission, which released a draft of its first Artificial Intelligence Strategic Plan last June. Like the bill, the Commission’s document also underscores the need for consistent human-machine interaction. (3)
“The legislation follows the Republican National Committee’s release of an entirely AI-generated video…”
A House Democrat introduced legislation on Tuesday requiring political advertisements to include a disclaimer if they were created using artificial intelligence. The proposal comes as concerns about the use of AI software—and its potential to generate entirely fake or misleading text, audio and video—continue to mount ahead of the 2024 presidential primary season.
The bill—the REAL Political Ads Act—was introduced by Rep. Yvette Clarke, D-N.Y., who has been a prominent voice in Congress about the potential harms and biases of AI-generated content. Clarke, who serves on the House Energy and Commerce Committee and the House Homeland Security Committee, previously introduced legislation in 2019 and 2021 that would require that deep fakes—digitally manipulated photos, video or audio—include “digital watermarks” and a written disclaimer stating that the pieces of media had been altered or generated.
Clarke’s legislation would amend federal campaign election laws to require that political ads “include a statement within the contents of the advertisements if generative AI was used to generate any image or video footage in the advertisements.”
In a statement, Clarke warned that “the upcoming 2024 election cycle will be the first time in U.S. history where AI-generated content will be used in political ads by campaigns, parties, and Super PACs.”
“Unfortunately, our current laws have not kept pace with the rapid development of artificial intelligence technologies,” she added. “If AI-generated content can manipulate and deceive people on a large scale, it can have devastating consequences for our national security and election security. It’s time we sound the alarm, and work to ensure our campaign finance laws keep pace with the innovation of new technologies.”
The introduction of Clarke’s bill comes as the use of generative AI tools has crossed over into the world of political campaigns. Last week, following President Joe Biden’s reelection announcement, the Republican National Committee released a video that it said was entirely created through the use of AI software. The ad envisions a dystopian future in which, after Biden wins the 2024 presidential election, his leadership is undermined by a series of domestic and international crises, including a Chinese invasion of Taiwan.
Clarke told The Washington Post in a May 2 article that her legislation was in direct response to the RNC’s video, which included a disclaimer in the top-left corner stating that it was “built entirely with AI imagery.” She warned, however, that “there will be those who will not want to disclose that it’s AI-generated, and we want to protect against that, particularly when we look at the political season before us.”
While generative AI has not played a prominent role in political campaigns until now, digitally altered videos and audio have been used to spread mis- and disinformation in recent years. Ahead of the 2020 elections, a video of then-House Speaker Nancy Pelosi, D-Calif., was manipulated to make it appear as though she was intoxicated while giving a speech. That video, as well as a similarly doctored video from 2019, received millions of views on social media platforms. (4)
“The federal government aims to capitalize on the rapid innovation in the artificial intelligence sector.”
The Biden administration is opening seven new artificial intelligence laboratories, fueled by $140 million in federal funding, the White House announced Thursday. The National Science Foundation will helm operations, with support from fellow government agencies. The institutes will focus on six research topics.
The broad goals within these research initiatives are to harness AI technologies to support human health and development research, support cyber defenses and aid climate-resilient agricultural practices.
Public sector-funded research and enhanced private sector cooperation are two of the new commitments the Biden administration will incorporate in its evolving tech policy surrounding emerging and critical technology systems.
The major influx of new funding signals the federal government’s intent to continue innovation in AI and machine learning technologies, while simultaneously working to mitigate risks posed by generative technologies. (5)
“The bipartisan legislation…aims to help public sector employees catch up with advancing AI technologies.”
New legislation is responding to the federal workforce’s knowledge gap concerning advancing artificial intelligence systems, with a team of bipartisan senators aiming to create a new training program specifically for leaders at government agencies.
Introduced by Sens. Gary Peters, D-Mich., and Mike Braun, R-Ind., the Artificial Intelligence Leadership Training Act, first announced on May 11, establishes a subagency within the Office of Personnel Management that focuses on training covered and eligible employees in artificial intelligence systems that may be incorporated into federal operations.
The ultimate goal of the bill is to improve the federal workforce’s skills and acumen regarding AI applications, a technology that stands to keep evolving rapidly and seeping into daily functions.
“As the federal government continues to invest in and use artificial intelligence tools, decision-makers in the federal government must have the appropriate training to ensure this technology is used responsibly and ethically,” said Peters in a press release. “With AI training, federal agency leaders will have the expertise needed to ensure this technology benefits the American people and to mitigate potential harms, such as bias or discrimination.”
Some of the mandated subjects for the education program stipulated in the bill include defining AI, basic functionality, risk and benefit analyses, how data informs AI algorithms, risk mitigation techniques and the broader infrastructure needed to govern AI system deployment. The bill would also require the OPM director to keep the curriculum updated.
“In the past couple of years, we have seen unprecedented development and adoption of AI across industries. We must ensure that government leaders are trained to keep up with the advancements in AI and recognize the benefits and risks of this tool,” Braun said.
Identifying risks inherent to AI systems is a key part of the proposed curriculum for the program created by the bill.
The focus on potential hazards in AI systems handling critical data comes amid the rise of sophisticated generative AI software capable of further spreading misinformation and mishandling data—two problems federal agencies are looking to prevent.
“The training aims to help federal leaders understand the capabilities, risks, and ethical implications associated with AI, so they can better determine whether an AI capability is appropriate to meet their mission requirements,” Peters’ press release says.
The Artificial Intelligence Leadership Training Act takes cues from government officials who have long advocated for a more tech-savvy workforce, and complements the AI Training Act, another bill introduced by Peters that became law in October 2022.
Several other similar pieces of legislation intended to address the federal workforce’s knowledge gaps concerning advanced, emerging technologies like AI have circulated on Capitol Hill in recent years. (6)
https://oodaloop.com/ooda-original/2023/04/26/the-cybersecurity-implications-of-chatgpt-and-enabling-secure-enterprise-use-of-large-language-models/
https://oodaloop.com/archive/2019/02/27/securing-ai-four-areas-to-focus-on-right-now/
https://oodaloop.com/archive/2023/05/09/ai-enabled-image-generation-midjourney-dall-e-stable-diffusion/
https://oodaloop.com/archive/2023/05/08/a-methodological-note-chatgpt-is-not-ready-for-criminal-network-analysis-from-unstructured-data/
https://oodaloop.com/archive/2023/05/08/the-ooda-network-on-the-real-danger-of-ai-innovation-at-exponential-speed-and-scale-and-not-adequately-addressing-ai-governance/