Why a Pause on AI Development Is Not the Answer: An Insider’s Perspective
The thorny challenges posed by chatbots and AI are not going to disappear. And while many knowledgeable and well-intentioned people have signed a petition calling for a six-month pause on advanced AI research, such a pause is both unrealistic and unwise. What we’re dealing with is what I like to call a “hairball” tech-meets-society issue. Hairball problems are tangled and difficult to solve because they are intricate and multifaceted: they involve so many stakeholders, intersecting domains, and competing interests that they are hard to address. A pause in technology research won’t help solve these uniquely human conundrums.
What will help are systematic, methodical, massive public engagements that inform pilot projects on the business and civil implications of artificial intelligence at the national and local levels. All of us will be affected by the promise and potential perils of the changes that advancements in AI will bring. Thus, all of us should have a voice in them, and all of us should work to ensure that our societies are both well-informed and ready to flourish in a rapidly changing world that will soon look vastly different.
Why Pausing AI and Chatbot Research and Development is Not the Answer
At first glance, pausing development might seem compelling given the challenges posed by Large Language Models (LLMs); however, there are several reasons why this approach is flawed. First, it is essential to consider global competition. Even if every U.S. company agreed to a pause, other countries would continue their AI research, making any national or international agreement far less effective.
Second, the proliferation of the kind of AI that LLMs represent is already well underway. Stanford University’s “Alpaca” experiment demonstrated that an open-source LLM could be fine-tuned to approach ChatGPT’s capabilities for under $600. This breakthrough accelerates the spread of AI, making it more accessible to a range of actors, including those with bad intent.
Third, history teaches us that a pause on AI could lead to secret development. Publicly stopping AI research might prompt nations to pursue advanced AI research in secret, which could have dire consequences for open societies. This scenario is akin to the 1899 Hague Convention, where major powers publicly banned poison-filled projectiles, only to continue their research in secret, eventually deploying harmful gases during World War I.
When the FCC IT Team Stood Against Bot-Submitted Comments
In 2017, as the non-partisan Chief Information Officer of the Federal Communications Commission (FCC), I found myself in the crosshairs of a political disinformation campaign facilitated by rudimentary chatbots flooding the Commission’s comment system with fake messages. As the world now debates the development of Large Language Models (LLMs), I am compelled to share my insider’s perspective on that event and why I believe pausing LLM research is a grave mistake.
Back in 2017, the FCC considered a highly controversial rulemaking proposal. Based on experiences in 2014, some members of the FCC’s Information Technology (IT) Team anticipated the risk that bad actors could manipulate the process by posting ghost comments – submissions potentially from fake identities. If they succeeded, they could have overwhelmed the comment system and shut the entire rulemaking process down.
We previously had requested permission to implement a Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA) to detect bot-related comments. CAPTCHAs are the familiar “Are you a human?” checkboxes or the prompts asking you to select all the pictures containing cats. We also had requested the ability to block spam. While both would have been good precautions, neither request received General Counsel approval for the FCC’s commenting system.
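For readers unfamiliar with how such a check works, here is a minimal sketch, in Python, of the kind of server-side gate we were asking permission to add. It uses Google’s reCAPTCHA verification endpoint as one illustrative provider; the endpoint URL and the “secret”/“response” fields reflect that service’s documented API, while the surrounding accept_comment function and its parameters are hypothetical and are not the FCC’s actual code.

```python
import requests

# Google's documented reCAPTCHA server-side verification endpoint.
RECAPTCHA_VERIFY_URL = "https://www.google.com/recaptcha/api/siteverify"

def accept_comment(comment_text: str, captcha_token: str, secret_key: str) -> bool:
    """Hypothetical gatekeeper: accept a comment only if the submitter
    passed a CAPTCHA challenge in the browser.

    `captcha_token` is the value the CAPTCHA widget places in the form;
    `secret_key` is the server-side key issued when a site registers
    with the CAPTCHA provider.
    """
    resp = requests.post(
        RECAPTCHA_VERIFY_URL,
        data={"secret": secret_key, "response": captcha_token},
        timeout=5,
    )
    result = resp.json()
    if not result.get("success", False):
        # Token missing, expired, or failed the challenge: treat as a likely bot.
        return False
    # In a real system the comment would now be written to the docket store.
    print(f"Accepted comment ({len(comment_text)} chars)")
    return True
```

The point of a check like this is not to be unbeatable – CAPTCHAs can be farmed out or defeated – but to raise the cost of high-volume automated submissions enough that they become visible and manageable.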
Confronting a Bot-Generated Comment Flood in 2017
When the proceeding started in 2017, we noticed a flood of repetitive comments coming into the system with only slight variations – a sign they were likely generated by primitive chatbot precursors. Later, an even larger wave of comments arrived at strange hours, when most Americans were asleep, raising further suspicion of automated origin. Finally, the onslaught peaked when 2.3 million comments were submitted in just two weeks, surpassing the total number of comments the FCC had ever received for any single proceeding, including those lasting over four months. When the Chairman’s office asked whether what we were seeing was a denial-of-service attack, my conclusion based on this evidence was yes – albeit one at the application layer rather than the network layer, which is much harder to distinguish from legitimate human traffic, especially when bot tests are not permitted. My colleague Vint Cerf, who is well versed in how the Internet operates, agreed with this analysis.
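To make the timing signal concrete, the sketch below is a hypothetical, deliberately simple analysis – not the FCC’s actual tooling – that counts comments per hour from a list of submission timestamps and flags overnight hours whose volume dwarfs the median hourly volume. The function name, thresholds, and sample data are all illustrative assumptions.

```python
from collections import Counter
from datetime import datetime
from statistics import median

def flag_overnight_spikes(timestamps: list[datetime],
                          overnight_hours: range = range(0, 6),
                          spike_factor: float = 5.0) -> list[int]:
    """Return overnight hours (0-23, local time) whose comment volume is
    at least `spike_factor` times the median hourly volume.

    Heavy traffic while most of the country is asleep is not proof of
    automation, but it is a reason to look closer.
    """
    per_hour = Counter(ts.hour for ts in timestamps)
    if not per_hour:
        return []
    baseline = median(per_hour.values())
    return [hour for hour in overnight_hours
            if per_hour.get(hour, 0) >= spike_factor * baseline]

# Example: synthetic data with steady daytime traffic and a 3 a.m. surge.
if __name__ == "__main__":
    sample = ([datetime(2017, 5, 8, h, m % 60) for h in range(9, 18) for m in range(50)]
              + [datetime(2017, 5, 8, 3, m % 60) for m in range(400)])
    print(flag_overnight_spikes(sample))  # -> [3]
```

A real investigation would of course combine timing with other signals – submission rate per source, content similarity, and form-field anomalies – rather than rely on any one heuristic.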
Despite our inability to block or remove potential spam, the FCC IT team scaled up the cloud-based system to handle the influx. As a result, the commenting system remained operational 99.4% of the time. The flood had not shut down the comment system. We had won.
Unbeknownst to me at the time, both political parties in the U.S. were involved in orchestrating this flood of fake comments. Instead of leadership celebrating that the comment system had stayed up, political disinformation began to spread claiming we had made up the denial of service – despite the evidence. An Inspector General (IG) investigation never asked me what I saw, thought, or did. Nor did the investigation examine the content of the comments for obvious signs of spam and bot generation, which was probably the only way to distinguish legitimate human-generated comment traffic from bot-generated comment traffic when both arrive over Hypertext Transfer Protocol (HTTP) at the application layer.
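The kind of content analysis I have in mind could start as simply as the sketch below: a hypothetical (and deliberately naive) pass that normalizes each comment and groups near-identical texts together, so that template-driven submissions with minor word swaps stand out as unusually large clusters. The function names, similarity threshold, and sample comments are illustrative assumptions, not a description of any tool the IG used or should have used.

```python
import re
from difflib import SequenceMatcher

def normalize(text: str) -> str:
    """Lowercase, strip punctuation, and collapse whitespace."""
    return re.sub(r"\s+", " ", re.sub(r"[^\w\s]", "", text.lower())).strip()

def cluster_near_duplicates(comments: list[str], threshold: float = 0.9) -> list[list[str]]:
    """Group comments whose normalized text is at least `threshold` similar
    to a cluster's first member. Very large clusters suggest templated,
    possibly automated submissions; small clusters look like ordinary
    individual comments.
    """
    clusters: list[list[str]] = []
    seeds: list[str] = []
    for comment in comments:
        norm = normalize(comment)
        for i, seed in enumerate(seeds):
            if SequenceMatcher(None, norm, seed).ratio() >= threshold:
                clusters[i].append(comment)
                break
        else:
            seeds.append(norm)
            clusters.append([comment])
    return clusters

# Example: two templated variants cluster together; the distinct comment does not.
if __name__ == "__main__":
    sample = [
        "I strongly oppose this proposal and urge the Commission to reject it.",
        "I strongly oppose this proposal and urge the commission to reject it!",
        "Please consider the impact on rural broadband deployment before deciding.",
    ]
    for group in cluster_near_duplicates(sample):
        print(len(group), "->", group[0])
```

Pairwise comparison like this does not scale to millions of comments without smarter indexing, but even a rough clustering of a sample would have surfaced the templated nature of much of the traffic.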
Ultimately, however, the FCC IT Team was vindicated. In 2021, the New York Attorney General revealed that 18 million of the 23 million comments were fraudulent, with about 8.5 million coming from companies hired by one party and another 9.3 million originating from one or more teenagers encouraged by the opposing party. This revelation highlights the ease with which powerful groups – or any individual, really – can manipulate public discourse. It also highlights the need for knowledgeable peer review of complex technological issues in public service.
Now in 2023 and Beyond: Proactive Approaches to AI and Society
Looking to the future, to effectively address the challenges arising from AI, we must foster a proactive, results-oriented, and cooperative approach with the public. Think tanks and universities can engage the public in conversations about how to work, live, govern, and co-exist with modern technologies that impact society. By involving diverse voices in the decision-making process, we can better address and resolve the complex challenges AI presents on local and national levels.
In addition, we must encourage industry and political leaders to participate in finding non-partisan, multi-sector solutions if civil societies are to remain stable. By working together, we can bridge the gap between technological advancements and their societal implications.
Finally, launching AI pilots across various sectors, such as work, education, health, law, and civil society, is essential. We must learn by doing how to create civil environments where AIs can be developed and deployed responsibly. These initiatives can help us better understand and integrate AI into our lives, ensuring its potential is harnessed for the greater good while mitigating risks.
In 2019 and 2020, a group of fifty-two people asked the Administrative Conference of the United States (which helps guide rulemaking procedures for federal agencies), the Government Accountability Office, and the General Services Administration to call attention to the need to address chatbots flooding public commenting procedures and potentially crowding out or denying service to actual humans wanting to leave a comment. We asked:
1. Does identity matter regarding who files a comment or not — and must one be a U.S. person in order to file?
2. Should agencies publish real-time counts of the number of comments received — or is it better to wait until the end of a commenting round to make all comments available, including counts?
3. Should third-party groups be able to file on behalf of someone else or not — and do agencies have the right to remove spam-like comments?
4. Should the public commenting process permit multiple comments per individual for a proceeding — and if so, how many comments from a single individual are too many? 100? 1000? More?
5. Finally, given public perceptions about potential conflicts of interest for any agency running its own public commenting process, should the U.S. government consider having third-party groups take responsibility for assembling comments and then filing them with the government via a validated process?
These same questions need pragmatic pilots that involve the public in co-exploring and co-developing how we operate effectively amid these technological shifts. As the capabilities of LLMs continue to grow, we need positive change agents willing to tackle the messy issues at the intersection of technology and society. The challenges are immense, but so too are the opportunities for positive change. Let’s seize this moment to create a better tomorrow for all. Working together, we can co-create a future that embraces AI’s potential while mitigating its risks, informed by the hard lessons we have already learned.