Early warning systems are flashing red – indicating a growing “perfect storm” over the next 11 months as:
Foundational Large Language Models (LLMs) grow exponentially – driven by their exponential use worldwide;
These LLMs not only grow in size but learn at exponential speed as user data inputs grow – their training datasets are built on the storage of global input data, including the questions and queries generated by this worldwide user base;
Unintended consequences and maleficent applications of the code scale at exponential speed and volume – such as adversarial machine learning techniques and generative AI innovation applied to mis- and disinformation on social media platforms; and
Global, national, state and local regulation and governance will not be in place quickly enough to have any measurable impact or mitigate risk in any significant fashion ahead of the U.S. presidential debates, primary season, party conventions and general election in November 2024; and
Over 2 billion people go to the polls globally in 2024.
Cumulatively, the result will be a “perfect storm” of all the unintended consequences we have experienced to date from an unregulated tech sector, coupled with cyberwarfare tactics and kompromat innovation.
This time out, it will all play out with exponential speed, scale and volume over the course of 2024. Harrowing? Yes. And the ships have definitely already sailed. To track, measure and “price in” the impacts of this coming storm for your organization, the first step is weeding out the cognitive bias that insists it is not happening.
To that end, we have filtered out some of the early warning signals and the broad pattern recognition that validate the reality of this growing storm.
Featured Image: OpenAI’s DALL-E with the prompt “As Two Billion People Go to the Polls In 2024, AI and Misinformation are The Perfect Storm in the style of Cyberpunk”
More than 50 countries that are home to half the planet’s population are due to hold national elections in 2024, but the number of citizens exercising the right to vote is not unalloyed good news. The year looks set to test even the most robust democracies and to strengthen the hands of leaders with authoritarian leanings.
From Russia, Taiwan and the United Kingdom to India, El Salvador and South Africa, the presidential and legislative contests have huge implications for human rights, economies, international relations and prospects for peace in a volatile world. In some countries, the balloting will be neither free nor fair. And in many, curbs on opposition candidates, weary electorates and the potential for manipulation and disinformation have made the fate of democracy a front-and-center campaign issue.
Launched as a partnership between Meta and independent external researchers, the U.S. 2020 Facebook & Instagram Election Study has led to groundbreaking social science scholarship on social media’s political effects. Professors Natalie Stroud and Joshua Tucker led the 17-person team of external researchers, which has published four studies in Science and Nature and has additional papers currently undergoing peer review. Prof. Stroud and Prof. Tucker joined the Berkman Klein Center’s Institute for Rebooting Social Media (RSM) for a discussion on the project’s findings and the process that generated them.
The Record summarized the release of the reports – a major research effort (and a communications campaign to support the broad distribution of the findings and recommendations) by Meta:
“Foreign interference groups are attempting to build and reach online audiences ahead of a number of significant elections next year, and “we need to remain alert,” Meta warned on Thursday. National elections are set to be held in the United States, United Kingdom and India — three of the world’s largest economies — as well as in a number of countries that have previously been targeted by foreign interference, including Taiwan and Moldova. Here’s what you need to know:
Foreign interference groups are preparing for significant upcoming elections: Meta has warned that these groups are aiming to build and reach online audiences ahead of major elections in 2024. These elections will take place in several countries, such as the United States, United Kingdom and India, as well as Taiwan and Moldova, all of which have previously been targeted by foreign interference.
Meta continues to disrupt influence operations: In its latest adversarial threat report, Meta unveiled its findings on three separate influence campaigns – two originating from China and one from Russia. All of these operations sought to influence perceptions and discussions on social media about key political issues and events in the target countries. Meta’s efforts to tackle these campaigns involve identifying and removing “coordinated inauthentic behavior.”
Concerns remain over perception hacking and information sharing: The report also highlighted the concept of “perception hacking,” whereby threat actors attempt to shake trust in democratic processes and facts without actually influencing the process itself. The issue of sharing intelligence with social media companies has also sparked controversy amid allegations of political censorship. However, Meta claims that such sharing is necessary to identify and disrupt foreign interference in a timely manner.”
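What counts as “coordinated inauthentic behavior” is not spelled out in the report, but the core signal is conceptually simple: many nominally unrelated accounts pushing near-identical content in a tight time window. The Python sketch below is purely illustrative – a naive heuristic over invented post records, not Meta’s actual detection pipeline:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical post records: (account_id, timestamp, text).
# Invented for illustration; a real pipeline would ingest platform data.
posts = [
    ("acct_1", datetime(2024, 1, 5, 9, 0), "Candidate X rigged the vote!"),
    ("acct_2", datetime(2024, 1, 5, 9, 2), "candidate x rigged the vote"),
    ("acct_3", datetime(2024, 1, 5, 9, 3), "Candidate X rigged the vote."),
    ("acct_4", datetime(2024, 1, 7, 14, 0), "Great weather for the rally today."),
]

def normalize(text: str) -> str:
    """Crude normalization: lowercase, keep only letters, digits and spaces."""
    return "".join(ch for ch in text.lower() if ch.isalnum() or ch.isspace()).strip()

def flag_coordinated(posts, min_accounts=3, window=timedelta(minutes=10)):
    """Flag messages posted near-verbatim by many distinct accounts in a short window."""
    clusters = defaultdict(list)
    for account, ts, text in posts:
        clusters[normalize(text)].append((account, ts))
    flagged = []
    for text, hits in clusters.items():
        hits.sort(key=lambda hit: hit[1])
        accounts = {account for account, _ in hits}
        span = hits[-1][1] - hits[0][1]
        if len(accounts) >= min_accounts and span <= window:
            flagged.append((text, sorted(accounts)))
    return flagged

for text, accounts in flag_coordinated(posts):
    print(f"Possible coordination: {accounts} -> '{text}'")
```

Real platforms layer network analysis, account-creation metadata and off-platform intelligence on top of signals like this; the sketch only shows the shape of the problem.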
Given the proliferation of AI deepfakes in the recent Slovakian election, it’s getting harder to tell who’s talking. But first…
The Cyber Angle
Something didn’t add up in an alleged conversation between Progressive Slovakia’s leader, Michal Simecka, and a local journalist that circulated in the run-up to Slovakia’s elections [in September 2023].
The speech was stilted and their voices flat even as the leader of the country’s main pro-European party seemed to slag local voters, discuss buying votes from the Roma minority and joke about child pornography.
If it sounded off, it’s because it was. AFP fact checkers concluded the recording was a hoax synthesized by an artificial intelligence tool trained on samples of the speakers’ voices. It was one of a handful of fakes that made the rounds on social media, messenger apps and email, including one where a person that sounded like Simecka plotted to jack up beer prices after the elections.
[Bloomberg’s] Jillian Deutsch and Daniel Hornak detailed the use of disinformation in Slovakia’s election, and Olivia Solon documented the role of AI deepfakes.
What is clear is that a new era of disinformation is dawning. While experts have long warned about the use of deepfakes to sway voters, AI is now cheap and accessible enough for anyone to try their hand at it.

“Even though the deepfake was technically quite crude — you could definitely hear that this was not a real person — this recording spread rapidly,” said Daniel Milo, the head of a unit at the Slovak Interior Ministry that fights disinformation. “In one or two years’ time, you might not be able to tell the difference.”

Rapidly improving technology, coupled with a number of high-profile hacks targeting voter rolls around the world, suggests the problem is only going to get worse.
“The UK’s Electoral Commission wants tighter finance laws for AI spending and use”
“British election regulators have urged politicians to pass new laws to limit spending on artificial intelligence (AI) as well as new requirements to identify AI-generated content. The rapid ascendancy of AI tools and accessibility has raised myriad concerns about the potential impact bad actors could have on major events such as elections should officials fail to provide proper guardrails.”
A transcript…of a Center for Strategic and International Studies (CSIS) podcast from July 18, 2023, whose theme is understanding “what’s really going on to get to the truth of the matter about misinformation and artificial intelligence. Andrew Schwartz interviewed Tiffany Hsu, who is a reporter on the technology team at the New York Times. She covers misinformation and disinformation…”
Schwartz opens the podcast with the following question: “So, I want to start with an overarching question. How do you think AI, which is changing everything these days or seems to be, how do you think AI is changing the spread of misinformation?”
Generative artificial intelligence could be used by foreign adversaries to interfere in next year’s presidential election, President Joe Biden’s nominee to lead U.S. Cyber Command and the NSA warned Thursday. Here’s what you need to know:
The possible use of generative AI by foreign adversaries to interfere with the upcoming presidential election is causing concern among US national security officials, including President Biden’s nominee to lead US Cyber Command, Air Force Lt. Gen. Timothy Haugh.
Generative AI, such as ChatGPT, represents a significant threat because of its ability to create authentic-looking content. Cybersecurity and Infrastructure Security Agency Director Jen Easterly previously called it “the biggest issue that we’re going to deal with this century.”
The confirmation of Lt. Gen. Haugh, who has experience in cybersecurity and election protection through his work with the NSA joint task force, is being held up in the Senate due to partisan disagreements over DoD policy. His confirmation would be a crucial step toward addressing the threats posed by artificial intelligence in election interference.
“Researchers out of the University of Zurich determine language model GPT-3 can produce “compelling disinformation” crafty enough to fool people.”
A new study out of the University of Zurich has found that OpenAI’s language model GPT-3 is capable of producing “compelling disinformation” — more so, even, than people. “GPT-3 is a double-edge sword: In comparison with humans, it can produce accurate information that is easier to understand, but it can also produce more compelling disinformation,” the study’s abstract reads, adding that “humans cannot distinguish between tweets generated by GPT-3 and written by real Twitter users.”
The study… titled “AI Model GPT-3 (Dis)informs Us Better Than Humans,” was conducted by Giovanni Spitale, Nikola Biller-Andorno and Federico Germani out of the university’s Institute of Biomedical Ethics and History of Medicine. They derived its results from 697 participants and 220 tweets. The participants more consistently identified disinformation in tweets produced by real people and more consistently identified accurate information in tweets produced by GPT-3. In other words, the artificial intelligence’s output was better at fooling people as well as informing them, producing superior results on both ends of the spectrum.
The full findings of the study are available here, as are the data that went into it (of which there is an extensive amount that granularly dissects the methodology behind the aforementioned results). If language model GPT-3 sounds familiar, that’s because it’s from the lineage of models that power ChatGPT.
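To make the study’s headline comparison concrete, the sketch below computes the metric at issue: how often participants correctly flag disinformation, split by whether a tweet was human-written or GPT-3-generated. The judgment records here are invented for illustration only – the real data (697 participants, 220 tweets) are published alongside the paper:

```python
# Hypothetical judgment records, one per (participant, tweet) pair:
# (tweet_source, tweet_is_disinfo, participant_flagged_disinfo).
# Invented numbers; see the published dataset for the actual results.
judgments = [
    ("human", True, True), ("human", True, False), ("human", False, False),
    ("gpt3", True, False), ("gpt3", True, False), ("gpt3", False, False),
]

def recognition_rate(judgments, source):
    """Share of disinformation tweets from `source` that were correctly flagged."""
    flags = [flagged for s, is_disinfo, flagged in judgments
             if s == source and is_disinfo]
    return sum(flags) / len(flags) if flags else float("nan")

for source in ("human", "gpt3"):
    print(f"{source}: {recognition_rate(judgments, source):.0%} of disinfo flagged")
```

A lower rate on the GPT-3 rows is exactly the study’s finding: synthetic disinformation was harder for participants to spot than the human-written variety.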
AI-generated content is emerging as a disruptive political force just as nations around the world are gearing up for a rare convergence of election cycles in 2024.
Why it matters: Around one billion voters will head to polls in 2024 across the U.S., India, the European Union, the U.K. and Indonesia, plus Russia — but neither AI companies nor governments have put matching election protections in place.
AI startups tend to have few or no election policies.
After initially banning political uses of ChatGPT, OpenAI is now focused on banning “high volumes of campaign materials” and “materials personalized to or targeted at specific demographics.”
How it works: AI could upend 2024 elections via…
Fundraising scams written and coded more easily via generative AI.
A microtargeting tsunami, since AI lowers the costs of creating content for specific audiences – including delivering undecided or unmotivated voters “the exact message that will help them reach their final decisions,” according to Darrell West, senior fellow at the Brookings Institution’s Center for Technology Innovation. (A toy illustration of this cost collapse follows this list.)
Incendiary emotional fuel. Generative AI can create realistic-looking images designed to inflame, such as false representations of a candidate or of communities that are targets of a party’s ire.
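The microtargeting point is ultimately about marginal cost: once the message scaffolding exists, per-segment variants are nearly free. The Python sketch below is a deliberately crude, hypothetical illustration using static templates – a real operation would drive a generative model with per-segment prompts, which is precisely what makes the economics alarming:

```python
# Illustrative only: toy audience segments and a message template,
# all invented, showing why per-segment variants are nearly free to produce.
SEGMENTS = {
    "undecided_suburban": "worried about rising grocery prices",
    "young_first_time": "frustrated that politics ignores people your age",
    "rural_unmotivated": "tired of being overlooked by both parties",
}

TEMPLATE = ("If you're {concern}, you're not alone - "
            "here's what Candidate X says they'll do about it: {pitch}")

def render_variants(pitch: str) -> dict[str, str]:
    """Produce one tailored message per audience segment."""
    return {segment: TEMPLATE.format(concern=concern, pitch=pitch)
            for segment, concern in SEGMENTS.items()}

for segment, message in render_variants("a plan released this week").items():
    print(f"[{segment}] {message}")
```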
Social media platforms, meanwhile, are cutting back on their election integrity efforts.
Meta’s election teams face an uncertain future, with another round of company layoffs expected this month. Meta policy communications director Andy Stone declined to comment on how the company is adjusting its election efforts for AI. The company has spent more than $13 billion on safety and security measures since 2016, after Russian disinformation flooded Facebook during the 2016 campaign.
Geopolitical-Cyber Risk Nexus: The interconnectivity brought by the Internet has made regional issues affect global cyberspace. Now, every significant event has cyber implications, making it imperative for leaders to recognize and act upon the symbiosis between geopolitical and cyber risks. See: The Cyber Threat
Ransomware’s Rapid Evolution: Ransomware technology and its associated criminal business models have seen significant advancements. This has culminated in a heightened threat level, resembling a pandemic in its reach and impact. Yet, there are strategies available for threat mitigation. See: Ransomware, and update.
Challenges in Cyber “Net Assessment”: While leaders have long tried to gauge both cyber risk and security, actionable metrics remain elusive. Current metrics mainly determine if a system can be compromised, without guaranteeing its invulnerability. It’s imperative not just to develop action plans against risks but to contextualize the state of cybersecurity concerning cyber threats. Despite its importance, achieving a reliable net assessment is increasingly challenging due to the pervasive nature of modern technology. See: Cyber Threat
Decision Intelligence for Optimal Choices: The simultaneous occurrence of numerous disruptions complicates situational awareness and can inhibit effective decision-making. Every enterprise should evaluate their methods of data collection, assessment, and decision-making processes. For more insights: Decision Intelligence.
Proactive Mitigation of Cyber Threats: The relentless nature of cyber adversaries, whether they are criminals or nation-states, necessitates proactive measures. It’s crucial to remember that cybersecurity isn’t solely the responsibility of the IT department or the CISO – it’s a collective effort that involves the entire leadership. Relying solely on governmental actions isn’t advised given its inconsistent approach towards aiding industries in risk reduction. See: Cyber Defenses
The Necessity of Continuous Vigilance in Cybersecurity: The consistent warnings from the FBI and CISA concerning cybersecurity signal potential large-scale threats. Cybersecurity demands 24/7 attention, even on holidays. Ensuring team endurance and preventing burnout by allocating rest periods are imperative. See: Continuous Vigilance
Embracing Corporate Intelligence and Scenario Planning in an Uncertain Age: Apart from traditional competitive challenges, businesses also confront external threats, many of which are unpredictable. This environment amplifies the significance of Scenario Planning. It enables leaders to envision varied futures, thereby identifying potential risks and opportunities. All organizations, regardless of their size, should allocate time to refine their understanding of the current risk landscape and adapt their strategies. See: Scenario Planning
Track Technology Driven Disruption: Businesses should examine technological drivers and future customer demands. A multi-disciplinary knowledge of tech domains is essential for effective foresight. See: Disruptive and Exponential Technologies.
Daniel Pereira is research director at OODA. He is a foresight strategist, creative technologist, and an information communication technology (ICT) and digital media researcher with 20+ years of experience directing public/private partnerships and strategic innovation initiatives.