Start your day with intelligence. Get The OODA Daily Pulse.

Generative AI: Socio-Technological Risks, Potential Impacts, Market Dynamics, and Cybersecurity Implications

The risks, potential positive and negative impacts, market dynamics, and security implications of generative AI have emerged  – slowly, then rapidly, as the unprecedented hype cycle around artificial intelligence settled into a more pragmatic stoicism  – with project deployments – over the course of 2023.

Generative AI: Socio-Technological Risks, Potential Impacts, Market Dynamics, and Cybersecurity Implications

Socio-Technological Risks and Potential Impacts

DoD CDAO’s “Task Force Lima” to Explore Responsible Fielding of Generative AI Capabilities:  The Department of Defense (DoD) announced the establishment of a generative artificial intelligence (AI) task force, an initiative that reflects the DoD’s commitment to harnessing the power of artificial intelligence in a responsible and strategic manner.  Deputy Secretary of Defense Dr. Kathleen Hicks directed the organization of Task Force Lima; it will play a pivotal role in analyzing and integrating generative AI tools, such as large language models (LLMs), across the DoD.  Led by the Chief Digital and Artificial Intelligence Office (CDAO), Task Force Lima will assess, synchronize, and employ generative AI capabilities across the DoD, ensuring the Department remains at the forefront of cutting-edge technologies while safeguarding national security.

GPT-3, Neural Language Models and The Risks of Radicalization: In a report released in 2020 and made possible by the OpenAI API Academic Access Program, The Middlebury Institute of International Studies – Center on Terrorism, Extremism, and Counterterrorism (CTEC) evaluated the revolutionary improvements of GPT-3 for the risk of weaponization by extremists who may attempt to use GPT-3 or hypothetical unregulated models to amplify their ideologies and recruit to their communities.

Is Japan’s Approach to AI the Future of Global, Risk-based, Multistakeholder AI Governance?:  Of the many courses of action discussed by OpenAI CEO Sam Altman when he testified before Congress earlier this week,  the creation of a global body to manage the governance of AI came up several times.   The International Atomic Energy Agency and  CERN were mentioned as models.   For now, the 2023 G7 Summit meeting in Hiroshima, Japan this weekend is the initial filter we are applying to the potential for global collaboration specific to the risks and opportunities of artificial intelligence. The Center for Strategic and International Studies (CSIS) has done some great work on this topic.

Deep Fakes and National Security:  “In 2024, one billion people around the world will go to the polls for national elections. From the US presidential election in 2024 to the war in Ukraine, we’re entering the era of deepfake geopolitics, where experts are concerned about the impact on elections and public perception of the truth.

Reducing the Risk of the Exponential Growth of Automated Influence Operations:  Of the research outlets we have discovered since the launch of OODALoop.com, the Center for Security and Emerging Technology (CSET),  OpenAI, and the Stanford Internet Observatory are best-in-class sources on topics of vital interest.  A new report  – “Generative Language Models and Automated Influence Operations: Emerging Threats and Potential Mitigations” – is the result of a partnership between these three organizations “to explore how language models could be misused for influence operations in the future, and provide a framework for assessing potential mitigation strategies.

Is it Time for an “AI NTSB”? The Artificial Intelligence Incident Database May Already Foot the Bill: In a July 2021 policy brief “AI Accidents:  An Emerging Threat – What Could Happen and What to Do,” the Center for Security and Emerging Technology) (CSET) makes a noteworthy contribution to current efforts by governmental entities, industry, AI think tanks, and academia to  “name and frame” the critical issues surrounding AI risk probability and impact. 

Bill Gates Weighs in on the Opportunities and Responsibilities of a ChatGPT-based AI Future:  As Microsoft Chairman Bill Gates recently pointed out:  “In my lifetime, I’ve seen two demonstrations of technology that struck me as revolutionary  The first time was in 1980 when I was introduced to a graphical user interface—the forerunner of every modern operating system, including Windows…The second big surprise came just last year. I’d been meeting with the team from OpenAI since 2016 and was impressed by their steady progress.”  You can find the rest of Gates’ musing from his recent post at Gates Notes –  The Age of AI has begun – here, along with links to OODA Loop perspectives that are congruous to Gates’ point of view.  Lots of synergies between Gates’ POV and the OODA Loop editorial voice – which is great to see.

The Great GPT Leap is Disruption in Plain Sight:  From OODA CEO Matt Devost – “During my opening remarks for OODAcon 202, I noted several moments where the advancement of technology has taken me by surprise including the DARPA cyber grand challenge finale at Def Con and the images I was able to create with GPT. During our happy hour, former Congressman Will Hurd, who sits on the board of OpenAI, remarked that upcoming releases would represent a new opportunity for technology surprise.  Bob Gourley wrote about this inflection point last week as well. Over the weekend, the newly upgraded and released ChatGPT felt like one of those moments. We will continue to evaluate these technologies and put them into context for our OODA Network, but here are a few fun experiments I conducted over the weekend that provide some insight into why this technology is so disruptive.

We Are Witnessing Another Inflection Point In How Computers Support Humanity:  From OODA CTO Bob Gourley: “OODA and the many members of the OODA network have been tracking the dramatic developments in natural language processing for many reasons, largely centered around the potential impact on government and business operations. Personally I have a firm belief that this domain will see such dramatic positive change that all of us should be preparing ourselves for what it means.”

Market Dynamics

“We Have No Moat”: Tracking the Exponential Growth of Open Source LLM Performance (Latency and Throughout):  Like the Unix, Solaris, Irix, AIX Open Source operating systems that became the Linux-led open source operating system and the broader open source software movement (over a 30 to 40 year time period) the open source large language model “market” is poised to eat up market share in an exponentially fast time frame.  Hugging Face has created an “Open LLM-Perf Leaderboard”  for your organization to track, quantify and evaluate this exponential growth of LLM performance.

Broad AI Trends Reveal Opportunities for Advantage, Positive Business Outcomes and Continued Risk:  The ongoing groupthink about and pushback on ChatGPT and large language models aside, there are plenty of positive signals about the promise and opportunities of artificial intelligence, along with potential perils which should contribute to your organization’s risk awareness relative to the future of AI.

Is Deep Learning the Future of Financial Stability (or Volatility and Crisis)?:  Financial network complexity, biased algorithms, regulatory considerations, and predictive analytics (based on faulty foundational models) are only a few of the risk variables that will determine the future of AI in the finance sector.  Securities and Exchange Commission Chair Gary Gensler, in his previous life as an MIT Sloan School of Management Professor, wrote a seminal paper on deep learning and financial stability.  Find the still prescient insights from the paper here.

Five Driving Forces in the Tech Sector and the Future of Meta:  With 3 Billion Facebook users worldwide, the strategic marketing the company has ginned up about the impending arrival of the metaverse, the ongoing impact of  -mis, disinformation, information disorder, and the ongoing hobbling of the U.S. cognitive infrastructure – Meta matters.   The following post is organized around five driving forces in ‘Big Tech’ (and how they have manifested in or had a recent impact on the Meta): Ongoing Tech Sector Layoffs; Generative AI – Meta Introduces Open Source, Multisensory AI Model; Cybersecurity and the Social Media Platform Attack Surface;  Social Media and Cognitive Infrastructure; and Computational Power: On-Chip Innovation.

The Current “On-chip” Innovation and Physical Layer Market Dynamics of GPT Model Training, Crypto Mining, AI and the Metaverse:  Like the famous political campaign axiom:  “It’s the economy, stupid”, so goes a variation on the phrase that can be applied to the ongoing discussion about large language (LLM) model training, the development of the metaverse, AI, and crypto mining:  “It’s the physical layer, stupid.”  APIs, the application layer, and the cloud?  All good – but not as ephemeral and “cloud-like” as it is all made out to be sometimes.  In the end, there is a physical infrastructure running it all, and the current innovation at the physical layer  – in order to build the future  – is as interesting now as anything the future holds.   Geek out with us on these case studies from the current physical layer of GPT model training, crypto mining, AI, and metaverse innovation:

Cybersecurity (Including Autonomous Systems and National Security Implications)

When Artificial Intelligence Goes Wrong:  This special report is a guide into some of the darker sides of AI deployments.

The Challenges of and Defending Against Adversarial Machine Learning: This post is a primer on a big topic, adversarial machine learning.  More than any other development of, say, the last five to eight years, adversarial machine learning is very representative of a quote from Paul Virilio which is part of the research canon here at OODA:  “The invention of the ship was also the invention of the shipwreck.”

The Cybersecurity Implications of ChatGPT and Enabling Secure Enterprise Use of Large Language Models: ChatGPT security is emerging as a risk as well as an opportunity for operational innovation for all types of organizations.

OpenAI’s Recent Expansion of ChatGPT Capabilities Unfortunately Includes a Cybersecurity Vulnerability “In the Wild”: WolframAlpha and OpenTable are amongst sites accessed by recently released plug-ins- supported by ChatGPT – enabling the chatbot to utilize new information sources.  Soon after the release of the plug-ins,  an exploit vulnerability – CVE-2023-28432 – which affects a tool used for machine learning, analytics, and other processes – was discovered, adding to the list of recent security incidents hitting the game-changing LLM-based chatbot.

OpenAI, Hugging Face and DEFCON 31 (August 2023) on Red Teaming ChatGPT, Large Language Models, and Neural Language Models:  The OODA LLC team and members of the OODA Network have participated in or led hundreds of red teams across many divergent disciplines, ranging from strategic and tactical cyber to physical security threats – like infectious diseases or nuclear power plant targeting – to more abstract items like Joint Operating Concepts.  We recently looked for patterns in the exponential adoption of ChatGPT, large language models (LLMs), and neural language models (NLMs)  – and how it has intersected with the red teaming discipline.  Here is what we found and the results are encouraging.

Revisiting an Interview with OODA CEO Matt Devost on AI, Autonomy and ChatGPT: It is fair to say that the volume and speed of the coverage of artificial intelligence in every area of society have been dizzying.  As a response, this week Brooke Gladstone and her team over at WNYC’s On the Media have updated and re-released an episode from early January 2023 – which includes an interview with Matt.

Securing AI – Four Areas to Focus on Right Now:  It is a rarity that we can look at any one technology and know in advance that it will be incredibly impactful.  Artificial Intelligence is such a technology and that recognition affords us an opportunity to develop and deploy it securely.  This is a call to action to do just that.

Additional OODA Loop Resources

The Origins Story and the Future Now of Generative AI:  The fast moving impacts and exponential capabilities of generative artificial intelligence over the course of just one year.

In the Era of Code, Generative AI Represents National Security Risks and Opportunities for “Innovation Power”:  We are entering into the Era of Code. Code that writes code and code that breaks code. Code that talks to us and code that talks for us. Code that predicts and code that decides. Code that rewrites us. Organizations and individuals that prioritize understanding how the Code Era impacts them will develop increasing advantage in the future.  At OODAcon 2023, we will be taking a closer look at Generative AI innovation and the impact it is having on business, society, and international politics. IQT and the Special Competitive Studies Project (SCSP) recently weighed in on this Generative AI “spark” of innovation that will “enhance all elements of our innovation power” – and the potential cybersecurity conflagarations that may also be lit by that same spark. Details here.

For more OODA Loop News Briefs and Original Analysis, see OODA Loop | Generative AI.

Daniel Pereira

About the Author

Daniel Pereira

Daniel Pereira is research director at OODA. He is a foresight strategist, creative technologist, and an information communication technology (ICT) and digital media researcher with 20+ years of experience directing public/private partnerships and strategic innovation initiatives.