Start your day with intelligence. Get The OODA Daily Pulse.

“When you invent the ship, you also invent the shipwreck; when you invent the plane, you also invent the plane crash” is a seering insight attributed to cultural theorist Paul Virilio  – and is oft quoted here at OODA Loop.
We review here some of the new “inventions” that are part and parcel of the emergence of AI as a general application technology across any and all global industry sectors and societal systems.   

We track these threats rather closely, not because we believe any are show-stoppers. We believe just about every sector of the economy and every agency of government should be accelerating smart modern IT into their enterprise and this includes smart use of AI. The only reason to track these threats is to mitigate the risks as you accelerate them. We would love your continued feedback on these topics and are preparing a new series of AI posts examining the need to “decontrol” AI to help with adoption.   

Evolving Threats

The AI-fueled Future Economy 

AI ads are sweeping across Africa

Semafor reports:

African companies from Nairobi to Lagos are in a race to use artificial intelligence to cut their marketing and advertising budgets ahead of a difficult 2024 due to economic difficulties, fueling panic over potential job losses.

Businesses are increasingly using AI-generated images, models and voices for their advertising campaigns across TV and digital platforms, lowering their advertising budgets. Ad spending in sub-Saharan Africa fell by 11.6% in 2023, according to the World Advertising Research Centre (WARC), though a slight rebound this year is expected to be driven by a 6.1% increase in South Africa.

Safaricom, East Africa’s leading telecommunications company and Kenya’s biggest advertiser, unveiled what it claimed to be Africa’s first AI-generated TV ad in August, and has since rolled out other AI-driven campaigns on different platforms.

Other Kenyan companies that have rolled out AI in ads include private school group Pioneer, which ran AI-generated TV ads, publisher Kartasi Group, which uses AI-generated images on the cover of its exercise books, and popular bread brand Supa Loaf which uses AI-generated images on its billboards. In Nigeria, Coca Cola collaborated with local influencers in an AI-powered campaign over the Christmas period.

The surge in AI-use, propelled by the popularity of generative AI over the last year, has coincided with a downturn in many of Africa’s biggest economies. In Nigeria and Kenya, businesses are grappling with depressed earnings due to difficult macroeconomic conditions including weakening local currencies and high inflation.

AI’s Climate Impact Goes beyond Its Emissions

Excerpts from a really interesting Scientific American article that is worth a full read – 

“The exact effect that AI will have on the climate crisis is difficult to calculate, even if experts focus only on the amount of greenhouse gases it emits. That’s because different types of AI—such as a machine learning model that spots trends in research data, a vision program that helps self-driving cars avoid obstacles or a large language model (LLM) that enables a chatbot to converse—all require different quantities of computing power to train and run. For example, when OpenAI trained its LLM called GPT-3, that work produced the equivalent of around 500 tons of carbon dioxide. Simpler models, though, produce minimal emissions. Further complicating the matter, there’s a lack of transparency from many AI companies…That makes it even more complicated to understand their models’ impact—when they are examined only through an emissions lens.

This is one reason experts increasingly recommend treating AI’s emissions as only one aspect of its climate footprint…Take the fossil-fuel industry. In 2019 Microsoft announced a new partnership with ExxonMobil and stated that the company would use Microsoft’s cloud-computing platform Azure. The oil giant claimed that by using the technology—which relies on AI for certain tasks such as performance analysis—it could optimize mining operations and, by 2025, increase production by 50,000 oil-equivalent barrels per day. (An oil-equivalent barrel is a term used to compare different fuel sources—it’s a unit roughly equal to the energy produced by burning one barrel of crude oil.) In this case, Microsoft’s AI is directly used to add more fossil fuels, which will release greenhouse gases when burned, to the market.

In a statement emailed to Scientific American, a Microsoft spokesperson said the company believes that ‘technology has an important role to play in helping the industry decarbonize…balancing the energy needs and industry practices of today while inventing and deploying those of tomorrow.’ The spokesperson added that the company sells its technology and cloud services to ‘all customers, inclusive of energy customers.'” 

This article also includes a brief but sophsticated discussion of the automation of advertising using AI – which ties into the Semafor article (above) in a really interest way worth including here”

 “Fossil-fuel extraction is not the only AI application that could be environmentally harmful. “There’s examples like this across every sector, like forestry, land management, farming,” says Emma Strubell, a computer scientist at Carnegie Mellon University.  This can also be seen in the way AI is used in automated advertising.

When an eerily specific ad pops up on your Instagram or Facebook news feed, advertising algorithms are the wizard behind the curtain. This practice boosts overall consumptive behavior in society, Rolnick says. For instance, with fast-fashion advertising, targeted ads push a steady rotation of cheap, mass-produced clothes to consumers, who buy the outfits only to replace them as soon as a new trend arrives. That creates a higher demand for fast-fashion companies, and already the fashion industry is collectively estimated to produce up to eight percent of global emissions. Fast fashion produces yet more emissions from shipping and causes more discarded clothes to pile up in landfills. Meta, the parent company of Instagram and Facebook, did not respond to Scientific American’s request for comment.” 

Prompt Injection and Model Inversion Attacks

OpenAI’s Custom Chatbots Are Leaking Their Secrets

You don’t need to know how to code to create your own AI chatbot. Since the start of November—shortly before the chaos at the company unfoldedOpenAI has let anyone build and publish their own custom versions of ChatGPT, known as “GPTs”. Thousands have been created: A “nomad” GPT gives advice about working and living remotely, another claims to search 200 million academic papers to answer your questions, and yet another will turn you into a Pixar character.  However, these custom GPTs can also be forced into leaking their secrets. Security researchers and technologists probing the custom chatbots have made them spill the initial instructions they were given when they were created, and have also discovered and downloaded the files used to customize the chatbots. People’s personal information or proprietary data can be put at risk, experts say.

“The privacy concerns of file leakage should be taken seriously,” says Jiahao Yu, a computer science researcher at Northwestern University. “Even if they do not contain sensitive information, they may contain some knowledge that the designer does not want to share with others, and [that serves] as the core part of the custom GPT.”

Along with other researchers at Northwestern, Yu has tested more than 200 custom GPTs, and found it “surprisingly straightforward” to reveal information from them. “Our success rate was 100 percent for file leakage and 97 percent for system prompt extraction, achievable with simple prompts that don’t require specialized knowledge in prompt engineering or red-teaming,” Yu says.  To create a custom GPT, all you need to do is message ChatGPT and say what you want the custom bot to do. You need to give it instructions about what the bot should or should not do. A bot that can answer questions about US tax laws may be given instructions not to answer unrelated questions or answers about other countries’ laws, for example. You can upload documents with specific information to give the chatbot greater expertise, such as feeding the US tax-bot files about how the law works. Connecting third-party APIs to a custom GPT can also help increase the data it is able to access and the kind of tasks it can complete.

The information given to custom GPTs may often be relatively inconsequential, but in some cases it may be more sensitive. Yu says data in custom GPTs often contain “domain-specific insights” from the designer, or include sensitive information, with examples of “salary and job descriptions” being uploaded alongside other confidential data. One GitHub page lists around 100 sets of leaked instructions given to custom GPTs. The data provides more transparency about how the chatbots work, but it is likely the developers didn’t intend for it to be published. And there’s already been at least one instance in which a developer has taken down the data they uploaded.

It has been possible to access these instructions and files through prompt injections, sometimes known as a form of jailbreaking. In short, that means telling the chatbot to behave in a way it has been told not to. Early prompt injections saw people telling a large language model (LLM) like ChatGPT or Google’s Bard to ignore instructions not to produce hate speech or other harmful content. More sophisticated prompt injections have used multiple layers of deception or hidden messages in images and websites to show how attackers can steal people’s data. The creators of LLMs have put rules in place to stop common prompt injections from working, but there are no easy fixes.  “The ease of exploiting these vulnerabilities is notably straightforward, sometimes requiring only basic proficiency in English,” says Alex Polyakov, the CEO of AI security firm Adversa AI, which has researched custom GPTs.

AI systems ‘subject to new types of vulnerabilities,’ British and US cyber agencies warn

From our friends over at The Record: 

British and U.S. cybersecurity authorities published guidance on Monday about how to develop artificial intelligence systems in a way that will minimize the risks they face from mischief-makers through to state-sponsored hackers.  “AI systems are subject to new types of vulnerabilities,” the 20-page document warns — specifically referring to machine-learning tools. The new guidelines have been agreed upon by 18 countries, including the members of the G7, a group that does not include China or Russia.

The guidance classifies these vulnerabilities within three categories: those “affecting the model’s classification or regression performance,” those “allowing users to perform unauthorized actions” and those involving users “extracting sensitive model information.” It sets out practical steps to “design, develop, deploy and operate” AI systems while minimizing the cybersecurity risk.

The NCSC in August warned about “prompt injection attacks” as an apparently fundamental security flaw affecting large language models (LLMs) — the type of machine learning used by ChatGPT to conduct human-like conversations. “Research is suggesting that an LLM inherently cannot distinguish between an instruction and data provided to help complete the instruction,” the agency’s previous paper stated.

Monday’s guidance sets out how developers can secure their systems by considering the cybersecurity risks specific to the technologies that make up AI, including by providing effective guardrails around the outputs these models generate. The most pressing issue among a panel of experts at the launch event in London was around threats such as model inversion attacks — when potentially sensitive training data can be retrieved from the trained model — rather than generative AI being manipulated to produce media that is later used to deceive people online.

Algorithms that remember: model inversion attacks and data protection law

The following is the abstract of a paper which introduces the concept of “model inversion attacks”:

Many individuals are concerned about the governance of machine learning systems and the prevention of algorithmic harms. The EU’s recent General Data Protection Regulation (GDPR) has been seen as a core tool for achieving better governance of this area. While the GDPR does apply to the use of models in some limited situations, most of its provisions relate to the governance of personal data, while models have traditionally been seen as intellectual property. [Researchers at the Royal Society] present recent work from the information security literature around ‘model inversion’ and ‘membership inference’ attacks, which indicates that the process of turning training data into machine-learned systems is not one way, and demonstrate how this could lead some models to be legally classified as personal data. Taking this as a probing experiment, we explore the different rights and obligations this would trigger and their utility, and posit future directions for algorithmic governance and regulation.

This article is part of the theme issue ‘Governing artificial intelligence: ethical, legal, and technical opportunities and challenges’.

From the introduction:  “In this paper, we argue that changing technologies may render the situation less clear-cut. We highlight a range of recent research which indicates that the process of turning training data into machine-learned systems is not one way but two: that training data, a semblance or subset of it, or information about who was in the training set, can in certain cases be reconstructed from a model. The consequences of this for the regulation of these systems is potentially hugely significant, as the rights and obligations for personal data differ strongly from the few generally thought applicable to models.

First, we introduce relevant elements of data protection law for a broad audience, contextualized by both current debates around algorithmic systems and the growing trend to trade and enable access to models, rather than to share underlying datasets. Second, we introduce model inversion and membership inference attacks, describing their set-up, and explain why data protection law is likely to classify vulnerable models as personal data. We then describe selected consequences of this change for data subjects, who would have access to new information, erasure and objection rights, and for the modellers and model recipients, who would need to consider security, ‘data protection by design’ and storage limitation implications. We conclude with a short discussion about the utility of this approach, and reflections on what this set-up tells us about desirable directions for future law-making around fast-moving technological practices.”

New AI-enabled Volume and Impact of Legacy Threat Vectors, New Attack Surfaces and Points of Entry  

British intelligence warns AI will cause surge in ransomware volume and impact

Ransomware attacks will increase in both volume and impact over the next two years due to artificial intelligence (AI) technologies, British intelligence has warned. In an all-source intelligence assessment published on Wednesday – based on classified intelligence, industry knowledge, academic material and open source – the National Cyber Security Centre (NCSC) said it was ‘almost certain’ about the increase, the highest confidence rating used by British intelligence analysts.’ Why it matters:

  1. Artificial Intelligence (AI) is set to escalate ransomware attacks over the next two years, according to a report by the National Cyber Security Centre (NCSC). AI will, by enhancing the efficiency and stealth of attacks, pave the way for more effective reconnaissance and social engineering for cyber criminals. As such, volumes and impacts are expected to greatly rise.
  2. Cyber attackers are increasingly harnessing AI technologies for their malicious activities, the intelligence warns. Aside from its use in reconnaissance, AI also significantly assists in malware and exploit development, vulnerability research, and lateral movement, making existing hacking techniques more efficient.
  3. However, the report also indicates that more sophisticated uses of AI to enhance cyber operations are only likely to be available to best-resourced threat actors. A key limitation hampering the use of AI tools for superior hacking is the need for access to high-quality exploit data required to train the AI models. This, together with scaling barriers for automated targets, acts as a check on these threat actors.

ChatGPT: OpenAI Attributes Regular Outages to DDoS Attacks [Anonymous Sudan group takes responsibility]

The popular generative AI application ChatGPT experienced recurring outages [in November 203].  The company attributed the recurring disruptions to a distributed denial of service (DDoS) attack resulting in high error rates in the API and ChatGPT itself, and said that it’s undertaking a series of countermeasures to get the service back up and running. While OpenAI has not yet commented on who is behind the attacks, hacker group Anonymous Sudan claimed responsibility for the DDoS attacks via its Telegram channel. A current check on ChatGPT did not reveal any ongoing problems, but some believe that the platform can expect plenty of attention from cyberattackers in general going forward.

Your Personal Information Is Probably Being Used to Train Generative AI Models:  Excerpts from a really interesting Scientific American article that is worth a full read – 

“In the rush to build and train ever-larger AI models, developers have swept up much of the searchable Internet. This not only has the potential to violate copyrights but also threatens the privacy of the billions of people who share information online. It also means that supposedly neutral models could be trained on biased data. A lack of corporate transparency makes it difficult to figure out exactly where companies are getting their training data—but Scientific American spoke with some AI experts who have a general idea.

Where Do AI Training Data Come From?

To build large generative AI models, developers turn to the public-facing Internet. But “there’s no one place where you can go download the Internet,” says Emily M. Bender, a linguist who studies computational linguistics and language technology at the University of Washington. Instead developers amass their training sets through automated tools that catalog and extract data from the Internet. Web “crawlers” travel from link to link indexing the location of information in a database, while Web “scrapers” download and extract that same information.

A very well-resourced company, such as Google’s owner, Alphabet, which already builds Web crawlers to power its search engine, can opt to employ its own tools for the task, says machine learning researcher Jesse Dodge of the nonprofit Allen Institute for AI. Other companies, however, turn to existing resources such as Common Crawl, which helped feed OpenAI’s GPT-3, or databases such as the Large-Scale Artificial Intelligence Open Network (LAION), which contains links to images and their accompanying captions. Neither Common Crawl nor LAION responded to requests for comment. Companies that want to use LAION as an AI resource (it was part of the training set for image generator Stable Diffusion, Dodge says) can follow these links but must download the content themselves.”

Additional OODA Loop Resources

Technology Convergence and Market Disruption: Rapid advancements in technology are changing market dynamics and user expectations. See: Disruptive and Exponential Technologies.

The New Tech Trinity: Artificial Intelligence, BioTech, Quantum Tech: Will make monumental shifts in the world. This new Tech Trinity will redefine our economy, both threaten and fortify our national security, and revolutionize our intelligence community. None of us are ready for this. This convergence requires a deepened commitment to foresight and preparation and planning on a level that is not occurring anywhere. The New Tech Trinity.

AI Discipline Interdependence: There are concerns about uncontrolled AI growth, with many experts calling for robust AI governance. Both positive and negative impacts of AI need assessment. See: Using AI for Competitive Advantage in Business.

Benefits of Automation and New Technology: Automation, AI, robotics, and Robotic Process Automation are improving business efficiency. New sensors, especially quantum ones, are revolutionizing sectors like healthcare and national security. Advanced WiFi, cellular, and space-based communication technologies are enhancing distributed work capabilities. See: Advanced Automation and New Technologies

Emerging NLP Approaches: While Big Data remains vital, there’s a growing need for efficient small data analysis, especially with potential chip shortages. Cost reductions in training AI models offer promising prospects for business disruptions. Breakthroughs in unsupervised learning could be especially transformative. See: What Leaders Should Know About NLP

Daniel Pereira

About the Author

Daniel Pereira

Daniel Pereira is research director at OODA. He is a foresight strategist, creative technologist, and an information communication technology (ICT) and digital media researcher with 20+ years of experience directing public/private partnerships and strategic innovation initiatives.