One of the most powerful takeaways from OODAcon 2023 was a qualitative anecdote shared on the “Cyber Risks of Emerging Technologies” panel discussion: “We are fast approaching a ‘tipping point’ – when non-human generated content will vastly outnumber the amount of human generated content.” A general quantitative validation has also been floating around: “90% of Online Content will be AI-generated by 2026.” We took the time to validate this statistic and flesh out one of the sub-themes from the OODA Almanac 2024 inspired by this “tipping point” – The Anti-content Movement: “In the deluge of content generated by AI, the human element becomes a coveted rarity, a beacon of authenticity in a sea of algorithmically crafted narratives.”
“Experts estimate that as much as 90 percent of online content may be synthetically generated by 2026,” the report warned, adding that synthetic media “refers to media generated or manipulated using artificial intelligence.”
“Don’t believe everything you see on the Internet” has been pretty standard advice for quite some time now. And according to a new report from European law enforcement group Europol, we have all the reason in the world to step up that vigilance. “In most cases, synthetic media is generated for gaming, to improve services or to improve the quality of life,” the report continued, “but the increase in synthetic media and improved technology has given rise to disinformation possibilities.”
“On a daily basis, people trust their own perception to guide them and tell them what is real and what is not,” reads the Europol report. “Auditory and visual recordings of an event are often treated as a truthful account of an event. But what if these media can be generated artificially, adapted to show events that never took place, to misrepresent events, or to distort the truth?”
We vetted the primary source document of this 90% metric from Europol – and we could not find the exact quote anywhere in the report. It remains a very interesting report on deepfakes specifically, and we will return to it in a future OODA Loop analysis. Strange, as the exact Europol quote has been picked up by many mainstream outlets (Gartner, Psychology Today) and a slew of middling websites.
For our current purpose, the metrics suggested by the quote attributed to the Europol report remain valid as a scenario which will guide our research over the course of 2024: in this scenario, 90 percent of online content will be synthetically generated by 2026. And here is the full quote from OODAcon 2023 that inspired this OODA Loop research theme for 2024:
“We are fast approaching a ‘tipping point’ – when non-human generated content will vastly outnumber the amount of human generated content: We are currently in an era where there is an unprecedented amount of data available. This abundance of data has far exceeded the volume of data that humans can create organically. Generative data, created by AI systems, is becoming the dominant source of content creation.”
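The trajectory this scenario implies can be sketched with a toy model. All parameters below are hypothetical, chosen only to illustrate the dynamic described in the quote: human output growing roughly linearly while machine output compounds, so the synthetic share of new content races toward 90 percent.

```python
# Illustrative scenario model (hypothetical parameters, not empirical data):
# human-generated content grows slowly while AI-generated content compounds,
# so the synthetic share of total new content rises toward 90%.

def synthetic_share(years, human_base=100.0, human_growth=0.05,
                    ai_base=10.0, ai_growth=2.0):
    """Return the AI-generated fraction of new content after `years` years."""
    human = human_base * (1 + human_growth) ** years
    ai = ai_base * (1 + ai_growth) ** years
    return ai / (ai + human)

for year in range(5):
    print(f"year {year}: {synthetic_share(year):.0%} synthetic")
```

Under these made-up growth rates the synthetic share starts below 10 percent and approaches 90 percent within four years – the point of the sketch is not the specific numbers but how quickly a compounding source dominates a linear one.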
With that scenario hypothesis and the quote from OODAcon in mind, we have begun to create a research baseline on 1) the public’s relationship with and perception of artificial intelligence and 2) some frameworks for how to think about the impact of AI-generated synthetic media on critical thinking skills and the overall cognitive infrastructure – globally and in the U.S.
First, here is a sampling from a survey (conducted by the UK-based Public First) of overall public perceptions of AI since the hype cycle induced by the release of OpenAI’s ChatGPT in November 2022:
According to a Forbes Advisor survey: 76 percent of consumers are concerned about misinformation from artificial intelligence, and only 56 percent believe they can tell AI-generated content from human-created content.
In essence, the antidote to the malaise of misinformation and the challenges posed by AI-generated content lies in reinvigorating the human elements of our digital ecosystems. Through education, community engagement, and human curation, we can navigate the tumultuous waters of the information age with discernment and resilience.
Echo Chambers and Filter Bubbles
Much of this comes down to issues framed by the discipline of social psychology and the fact that humans – differentiated from other species on the planet – are deeply social animals.
The realm of social psychology provides a fascinating lens through which to examine the negative impacts of AI-generated synthetic media, social media and misinformation, particularly in how these phenomena influence individual and collective behavior, attitudes, and societal norms. At the heart of this exploration is the understanding that social media platforms, while revolutionary in fostering global connectivity and the democratization of information dissemination, also serve as fertile ground for the rapid spread of misinformation and the manipulation of public opinion.
One of the primary concerns highlighted by social psychology is the concept of “echo chambers” or “filter bubbles,” where users are increasingly exposed to information that reinforces their pre-existing beliefs and biases. This phenomenon is exacerbated by the algorithms that underpin social media platforms, designed to maximize engagement by presenting content that aligns with an individual’s preferences. The consequence is a polarized information ecosystem where divergent viewpoints are seldom encountered, and the opportunity for critical engagement and discourse is diminished.
The virality of misinformation on social media can often be attributed to its sensationalist nature, designed to elicit strong emotional responses such as fear, anger, or indignation. Social psychology posits that individuals are more likely to share content that evokes such emotions, thereby facilitating the rapid and widespread dissemination of false information. This dynamic not only undermines the quality of public discourse but also erodes trust in legitimate news sources and institutions, fostering a climate of skepticism and cynicism.
What are the implications if 90% of content is AI-generated by 2026, while a huge percentage of people engaging with this content still believe that it is human-generated and – in fact – a human interaction?
The impact of AI-generated content, social media and misinformation extends beyond the digital sphere, influencing real-world behaviors and attitudes. For instance, the spread of false information regarding public health measures can lead to non-compliance with guidelines designed to protect community well-being, thereby exacerbating public health crises. Similarly, misinformation campaigns targeting electoral processes can undermine the integrity of democratic institutions and erode public confidence in the electoral system.
Addressing the challenges posed by AI-generated content, social media, and misinformation requires a multifaceted approach.
Navigating the boom in automated content and increasing misinformation.
Ironically, this Psychology Today article leads with a paragraph that includes the aforementioned, nonexistent Europol statistic:
“A report from Europol earlier this year warned that “as much as 90 percent of online content may be synthetically generated by 2026,” referring to “media generated or manipulated using artificial intelligence.”
The remaining question is to what extent this will impact people’s critical thinking skills – and whether it really matters.
Psychologists at the University of Cambridge recently developed the first validated “misinformation susceptibility test” (MIST), which highlights the degree to which an individual is susceptible to fake news. Younger Americans (under 45) performed worse than older Americans (over 45) on the test, scoring 12 out of 20 correctly compared to 15 out of 20 for older adults. This was in part correlated with the amount of time spent online consuming content, indicating the relevance of how you spend your recreational time.
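The scoring behind a fixed-answer-key instrument like MIST is simple to picture. The sketch below is a hypothetical simplification for illustration only – it is not the published Cambridge instrument, just the arithmetic that produces scores like 12/20 versus 15/20:

```python
# Hypothetical simplification of MIST-style scoring: a respondent labels
# each of 20 headlines as real or fake; the score is the count of labels
# that match the answer key.

def mist_score(responses, answer_key):
    """Both arguments are equal-length lists of booleans (True = 'real')."""
    if len(responses) != len(answer_key):
        raise ValueError("responses and answer key must be the same length")
    return sum(r == a for r, a in zip(responses, answer_key))
```

A respondent who matches the key on 12 of 20 items scores 12/20, mirroring the under-45 average reported above.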
A lot of the current critical thinking tools encourage individuals to employ lateral thinking techniques, where we actively seek information from multiple sources, or to employ techniques such as inversion thinking, where we actively seek information that contradicts our own views. What remains to be measured, however, is how effective these tools will be in a content landscape that is up to 90 percent generated by AI and which can be rolled out and reproduced across thousands of websites en masse.
What will be essential, therefore, is to equip ourselves with tools that do not rely solely on cross-checking information.
The year 2024 will require a reorientation to new realities, largely driven by the acceleration of disruptive technologies grinding against the inertia of stale institutions that would rather we snack on the comfort food of the past than the buffet of the future. In past Almanacs we’ve talked about the rapid acceleration of technology and the power of exponentials and 2024 forward will mark the move from theoretical disruption to practical disruption. Those technologies we could not comprehend utilizing over the past five years will feel commonplace after the next five years.
Each year, the OODA Almanac is the edgiest piece we publish as we take the opportunity to not only provoke your thinking with disruptive ideas but also seek to peer out over the edge into the unknown. We hope the concepts discussed here help you reorient around what’s next, but also around what is possible.
From this year’s Almanac, two sub-themes entitled The Anti-content Movement and Not Just New Technologies but New Realities are apropos to this discussion:
The Anti-content Movement
The challenge will lie in distinguishing authentic human interaction from these high-quality, AI-generated facades, a task that will require innovative detection capabilities and a deep understanding of the nuances of human communication.
In the deluge of content generated by AI, the human element becomes a coveted rarity, a beacon of authenticity in a sea of algorithmically crafted narratives. This forecasted anti-content movement is not merely a reactionary step backward but a recalibration of value in the digital age. The essence of human creativity, the nuances of emotion, and the irreplaceable nature of personal experience will be elevated, creating a renaissance of verifiable human-centric content. Trust and affinity groups like the OODA Network will indeed rise as the arbiters of this new era, curating human expertise and interaction.
The integrity of human-generated content will become paramount, and our cybersecurity strategies must be agile enough to defend against the sophisticated manipulations that generative AI can produce, such as creating synthetic personas with high viral reach and believability.
Not Just New Technologies but New Realities
“We might engage in bar arguments of the reorientation required in the world imagined by William Gibson, but we will fight wars over the reorientation necessary to inhabit the landscape envisioned by Philip K. Dick.”
William Gibson transcended the future, while Philip K. Dick transcended reality. The future of the next ten years will be more closely aligned with Dick than Gibson. Gibson’s prescient visions of cyberpunk landscapes and the matrix have certainly shaped our understanding of a digital future. His narratives often hinge on the interplay between humanity and technology, forecasting a world where the two become inextricably linked. In contrast, Philip K. Dick’s work delves into the nature of reality itself, questioning the very fabric of existence and the human experience. His stories grapple with themes of identity, consciousness, and the nature of truth—concepts that are increasingly relevant in an era defined by deepfakes, misinformation, and the erosion of shared objective realities.
As we look to the next decade, it seems plausible that the themes explored by Dick will resonate more deeply with our societal trajectory. The rapid advancement of technology has brought us to a point where the manipulation of reality—be it through augmented reality, virtual reality, or artificial intelligence—is not just possible but becoming commonplace. The blurring lines between what is real and what is synthetic challenge our perceptions and could lead to a future that feels more akin to the surreal and often dystopian worlds depicted by Dick.
This is not to say that Gibson’s influence will diminish; on the contrary, his insights into the interconnectivity of global systems and the cybernetic enhancements of the human condition continue to unfold around us. However, the philosophical quandaries that Dick presents—such as the nature of humanity in an increasingly artificial world—may prove to be more immediately pertinent as we confront the ethical and existential implications of our technological evolution.
Reflecting on the current state of the world, it is evident that the questions raised by Dick’s work are not just philosophical musings but pressing concerns. The struggle to discern truth from fabrication, to maintain a sense of self amidst a barrage of algorithmically curated content, and to find meaning in a world where traditional narratives are constantly being upended, are challenges we grapple with daily. In this sense, Dick’s transcendence of reality may indeed be the guiding theme for the next ten years. We might engage in bar arguments of the reorientation required in the world imagined by Gibson, but we will fight wars over the reorientation necessary to inhabit the landscape envisioned by Dick.
The path forward requires a concerted effort to harness the positive potential of these technologies while mitigating their risks. It is a journey that demands vigilance, innovation, and, above all, a commitment to preserving the integrity of our shared reality.
The advent of AI-generated content and synthetic media heralds a transformative era in the digital landscape, one that is poised to fundamentally alter our perception of reality. This shift is not merely technological but deeply philosophical, challenging the very bedrock of how we discern truth from fabrication.
AI-generated content, with its capacity to produce convincingly authentic material at unprecedented scales, introduces a new dimension to the concept of reality as we know it. The implications of this are profound, particularly when considering the potential for large-scale social media manipulation. The ability of generative AI to create content that appears genuine—whether it be text, images, or videos—means that the line between fact and fiction becomes increasingly blurred. This is not a distant future scenario but a present reality, with state and non-state actors already leveraging these capabilities to influence public opinion and political landscapes.
Moreover, the rise of synthetic media, such as deepfakes, exacerbates this challenge. These technologies enable the creation of hyper-realistic digital falsifications, making it difficult to distinguish between genuine and manipulated content. The potential for misuse is vast, ranging from disinformation campaigns designed to sow discord and manipulate elections, to more personal attacks aimed at individuals or organizations.
However, it’s crucial to recognize that this technological evolution also spurs innovation in detection and mitigation strategies. The development of tools and techniques to identify AI-generated content and authenticate digital media is advancing, albeit in a perpetual race against the capabilities of those seeking to exploit these technologies for malicious purposes.
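To make the idea of detection tooling concrete, here is a deliberately naive heuristic sketch: a type-token ratio check that flags repetitive, template-like text. Real detectors rely on far stronger signals (model perplexity, statistical watermarks, provenance metadata), and nothing below reflects any production system – it only illustrates that "detection" means looking for a measurable statistical signal:

```python
# Naive illustrative heuristic: mass-produced, template-like text often has
# lower lexical variety than human prose. Type-token ratio = unique words
# divided by total words; low values suggest repetitive content.

def type_token_ratio(text: str) -> float:
    """Return the fraction of distinct tokens in whitespace-split text."""
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    return len(set(tokens)) / len(tokens)

def looks_repetitive(text: str, threshold: float = 0.5) -> bool:
    """Flag text whose vocabulary variety falls below `threshold`."""
    return type_token_ratio(text) < threshold
```

A heuristic this crude produces abundant false positives and negatives; the point is that every detection approach, however sophisticated, reduces to some version of this pattern – compute a signal, compare it to a threshold – and adversaries optimize against exactly that.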
AI-generated content and synthetic media are reshaping our perception of reality, presenting both challenges and opportunities as we stand at this crossroads.
Considering these developments, how do you think AI-generated content and synthetic media will shape people’s perception of reality in the future?
By engaging in these communal exchanges, we not only challenge our own preconceptions but also build bridges across the chasms that misinformation seeks to widen.
In the labyrinth of digital discourse, where AI-generated content and misinformation proliferate with alarming velocity, the role of human interaction emerges as both a bulwark and a beacon.
In this context, the role of human discernment and critical thinking becomes paramount. As we navigate this new terrain, fostering media literacy and a critical mindset among the populace is essential. This involves not only the ability to question and verify the authenticity of information but also an understanding of the motivations behind content creation and dissemination. The essence of countering these challenges lies not merely in technological solutions but in reinvigorating the very fabric of human connectivity and critical engagement.
In your experiences, how do you currently see human interaction playing a role in mitigating the negative effects of AI-generated content, misinformation, and social media?
NOTE: This OODA Loop Original Analysis was partially generated with the cognitive augmentation of and in collaboration with ALTzero Project – MattGPT.
OODA CEO Matt Devost and OODA Network Members Review the OODA Almanac 2024 – Reorientation: Every year, we also use one of our monthly meetings for a discussion of the annual OODA Almanac with the OODA Network. This conversation took place at the February 2024 OODA Network Member Meeting, held on Friday, February 16, 2024.