
One of the most powerful takeaways from OODAcon 2023 was a qualitative anecdote shared on the “Cyber Risks of Emerging Technologies” panel discussion: “We are fast approaching a ‘tipping point’ – when non-human generated content will vastly outnumber the amount of human generated content.” A general quantitative validation has also been floating around: “90% of Online Content will be AI-generated by 2026.” We took the time to validate this quant and to flesh out one of the sub-themes from the OODA Almanac 2024 inspired by this “tipping point” – The Anti-content Movement: “In the deluge of content generated by AI, the human element becomes a coveted rarity, a beacon of authenticity in a sea of algorithmically crafted narratives.”

Experts: 90% of Online Content will be AI-generated by 2026

“Experts estimate that as much as 90 percent of online content may be synthetically generated by 2026,” the report warned, adding that synthetic media “refers to media generated or manipulated using artificial intelligence.”

“Don’t believe everything you see on the Internet” has been pretty standard advice for quite some time now. And according to a new report from European law enforcement group Europol, we have all the reason in the world to step up that vigilance.  “In most cases, synthetic media is generated for gaming, to improve services or to improve the quality of life,” the report continued, “but the increase in synthetic media and improved technology has given rise to disinformation possibilities.”

“On a daily basis, people trust their own perception to guide them and tell them what is real and what is not,” reads the Europol report. “Auditory and visual recordings of an event are often treated as a truthful account of an event. But what if these media can be generated artificially, adapted to show events that never took place, to misrepresent events, or to distort the truth?”

The Europol Report: Facing reality? Law enforcement and the challenge of deepfakes

We vetted the primary source document for this 90% metric from Europol – and we could not find the exact quote anywhere in the report. It is still a very interesting report, however, on deepfakes specifically. Strange, as the exact quote attributed to Europol has been picked up by well-known outlets (Gartner, Psychology Today) and a slew of middling websites. Stay tuned for a future OODA Loop analysis of this Europol report – as it is really insightful.

For our current purpose, the metric suggested by the quote attributed to the Europol report remains valid as a scenario that will guide our research over the course of 2024: in this scenario, 90 percent of online content will be synthetically generated by 2026. And here is the full quote from OODAcon 2023 that inspired this OODA Loop research theme for 2024:

“We are fast approaching a ‘tipping point’ – when non-human generated content will vastly outnumber the amount of human generated content: We are currently in an era where there is an unprecedented amount of data available. This abundance of data has far exceeded the volume of data that humans can create organically. Generative data, created by AI systems, is becoming the dominant source of content creation.”

With that scenario hypothesis and the quote from OODAcon in mind, we have begun to create a research baseline on 1) the public’s relationship with and perception of artificial intelligence and 2) some frameworks for how to think about the impact of AI-generated synthetic media on critical thinking skills and the overall cognitive infrastructure – globally and in the U.S.

Current Public Perception of AI

First, here is a sampling from a survey (conducted by the UK-based Public First) of the overall public perceptions of AI since the hype cycle induced by the release of OpenAI’s ChatGPT in November 2022:

  • Just 5% of our survey respondents said they were not familiar with AI, while nearly half of adults under 35 (46%) said they had already used ChatGPT. Only 8% said that recent AI progress had been slower than they expected, while 38% said it had been faster.
  • Overall, we saw mixed emotions around the rise of AI. The most commonly expressed emotion was curiosity, but otherwise we saw nearly equal excitement and worry.
  • 40% of current workers told us that they believed an AI could do their job better than them in the next decade, while 64% said that they expected AI to somewhat or significantly increase unemployment.
  • The public expects recent progress in AI to continue. Over two-thirds (67%) told us that they would not be surprised to learn that, in the next five years, a photorealistic scene from a TV show or movie was entirely AI generated, or that a car manufacturer could start running a factory entirely with robots (67%). 53% said that they would not be surprised if robotic soldiers were used in active warfare. (Note: our tracking reflects that all of these realities will arrive well before a 5-year timeframe.)
  • When we asked about different use cases, the public were highly supportive of using AI to give earlier warning of new medical conditions or help detect welfare fraud. By contrast, they were much less comfortable with AI being used to decide or to advise when it came to detecting guilt, either in a criminal or military context.
  • When we asked who should decide what AI tools are allowed to be used for, by far the most common responses were national governments, regulators and the legal system. Just 21% believed that this decision should lie in the hands of the developers of the AI system, and 16% the user.
  • 62% of respondents supported the creation of a new government regulatory agency, similar to the Medicines and Healthcare Products Regulatory Agency (MHRA), to regulate the use of new AI models. This was supported by nearly every demographic, no matter their age, political allegiance, economic liberalism or general level of tech optimism.
  • Just 20% told us that they believed AI companies should be allowed to train on any publicly available text or images, with around the same proportion (21%) saying they should be allowed to train on any work where the creator has not opted out. By contrast, 37% believed AI companies should always need explicit permission.
  • On average, the respondents to our survey expected a human-level artificial general intelligence (AGI) to arrive between 2030 and 2039. This is around the same time frame as current predictions in leading prediction markets, albeit faster than the consensus of AI experts.
  • When asked to compare the intelligence of the most advanced AIs today to other animals, the closest analogue was seen to be a human adult (27%), with just 10% thinking their intelligence was closest to a dog, 2% to a pig or 1% a sheep.
  • Overall, 32% thought advanced AI would make us safer, compared to 18% who thought it would make us less safe. When asked about specific risks from advanced AI, the most important were perceived to be increasing unemployment (49%) and more dangerous military robots (39%).
  • However, a significant minority were also worried about existential risks from AI. 29% said that an important risk was an advanced AI trying to take over or destroy human civilisation, and 20% thought it was a real risk that it could cause a breakdown in human civilisation in the next fifty years.
  • When asked their judgement on the overall probability that an advanced AI could cause humanity to go extinct in the next hundred years, around half thought this was highly unlikely: below 1 in 100. By contrast, just over a fifth (21%) thought it a significant probability, with at least a 10% possibility.

According to a Forbes Advisor survey: 76 percent of consumers are concerned about misinformation from artificial intelligence, and only 56 percent believe they can tell the difference between human-created and AI-generated content.

What Next? 

In essence, the antidote to the malaise of misinformation and the challenges posed by AI-generated content lies in reinvigorating the human elements of our digital ecosystems. Through education, community engagement, and human curation, we can navigate the tumultuous waters of the information age with discernment and resilience.

Echo Chambers and Filter Bubbles

This all comes down to issues framed by the discipline of social psychology – and the fact that humans, differentiated from other species on the planet, are deeply, deeply social animals.

The realm of social psychology provides a fascinating lens through which to examine the negative impacts of AI-generated synthetic media, social media and misinformation, particularly in how these phenomena influence individual and collective behavior, attitudes, and societal norms. At the heart of this exploration is the understanding that social media platforms, while revolutionary in fostering global connectivity and the democratization of information dissemination, also serve as fertile ground for the rapid spread of misinformation and the manipulation of public opinion. 

One of the primary concerns highlighted by social psychology is the concept of “echo chambers” or “filter bubbles,” where users are increasingly exposed to information that reinforces their pre-existing beliefs and biases. This phenomenon is exacerbated by the algorithms that underpin social media platforms, designed to maximize engagement by presenting content that aligns with an individual’s preferences.  The consequence is a polarized information ecosystem where divergent viewpoints are seldom encountered, and the opportunity for critical engagement and discourse is diminished.
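
To make the dynamic concrete, below is a minimal, illustrative sketch in Python. The one-dimensional “stance” model, the names, and the preference-update rule are all assumptions invented for this example – real platforms use high-dimensional embeddings and far more elaborate engagement models – but the feedback loop has the same shape: rank by affinity, infer preference from engagement, repeat.

```python
import random

def rank_feed(items, user_pref, k=5):
    """Toy engagement-maximizing ranker: return the k items whose
    stance (a float in -1..+1) is closest to the user's inferred
    preference."""
    return sorted(items, key=lambda stance: abs(stance - user_pref))[:k]

def simulate(rounds=10, pool_size=200, seed=42):
    rng = random.Random(seed)
    user_pref = 0.2  # a mild initial lean, inferred from early clicks
    for r in range(rounds):
        # The candidate pool spans the full -1..+1 spectrum of viewpoints...
        pool = [rng.uniform(-1, 1) for _ in range(pool_size)]
        feed = rank_feed(pool, user_pref)
        # ...but the served feed covers only a sliver of it, and engagement
        # nudges the inferred preference toward what was shown -- the
        # filter-bubble loop.
        user_pref = 0.8 * user_pref + 0.2 * (sum(feed) / len(feed))
        print(f"round {r}: pref={user_pref:+.2f} "
              f"feed spread={max(feed) - min(feed):.3f} (pool spread ~2.0)")

if __name__ == "__main__":
    simulate()
```

Running the sketch shows the served feed’s spread staying tiny relative to the pool’s full range: the ranker never has to suppress anything to produce an echo chamber; optimizing for affinity is enough.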

The virality of misinformation on social media can often be attributed to its sensationalist nature, designed to elicit strong emotional responses such as fear, anger, or indignation. Social psychology posits that individuals are more likely to share content that evokes such emotions, thereby facilitating the rapid and widespread dissemination of false information. This dynamic not only undermines the quality of public discourse but also erodes trust in legitimate news sources and institutions, fostering a climate of skepticism and cynicism.

What are the implications if 90% of content is AI-generated by 2026, while a huge percentage of the people engaging with this content still believe that it is human generated and – in fact – a human interaction?

The impact of AI-generated content, social media and misinformation extends beyond the digital sphere, influencing real-world behaviors and attitudes. For instance, the spread of false information regarding public health measures can lead to non-compliance with guidelines designed to protect community well-being, thereby exacerbating public health crises.  Similarly, misinformation campaigns targeting electoral processes can undermine the integrity of democratic institutions and erode public confidence in the electoral system.

Addressing the challenges posed by AI-generated content, social media and misinformation requires a multifaceted approach that encompasses:

  1. Technological solutions;
  2. Educational initiatives aimed at enhancing digital literacy;
  3. Regulatory measures to ensure accountability and transparency among social media platforms; and
  4. Individual cultivation of a critical mindset – actively seeking out diverse sources of information and engaging in thoughtful analysis before sharing content online.

How AI-Generated Content Can Undermine Your Thinking Skills

Navigating the boom in automated content and increasing misinformation.

Ironically, this Psychology Today article leads with a paragraph that includes the aforementioned, non-existent Europol quant:

  • AI-generated content is increasingly common; up to 90 percent of all content could be AI-generated by 2026.
  • Much of this content is mis- or disinformation, prompting concerns over AI’s societal impact.
  • Critical thinking remains essential to minimize the risk of manipulation.

“A report from Europol earlier this year warned that ‘as much as 90 percent of online content may be synthetically generated by 2026,’ referring to ‘media generated or manipulated using artificial intelligence.’”

The remaining question is to what extent this will impact people’s critical thinking skills – and whether it really matters.

Psychologists at the University of Cambridge recently developed the first validated “misinformation susceptibility test” (MIST), which highlights the degree to which an individual is susceptible to fake news. Younger Americans (under 45) performed worse than older Americans (over 45) on the misinformation test, scoring 12 out of 20 correctly, compared to 15 out of 20 for older adults. This was in part correlated with the amount of time spent online consuming content, indicating the relevance of how you spend your recreational time.

A lot of the current critical thinking tools encourage individuals to employ lateral thinking techniques, where we actively seek information from multiple sources, or to employ techniques such as inversion thinking, where we actively seek information that contradicts our own views. What remains to be measured, however, is how effective these tools will be in a content landscape that is up to 90 percent generated by AI and which can be rolled out and reproduced across thousands of websites en masse.

What will be essential, therefore, is to equip ourselves with tools that don’t rely on cross-checking information, such as:

  • Ensuring we understand statistics. This means knowing as much about what they don’t say as what they do.
  • Identifying the evidence base. What grounds is the content based on, how was the research generated, and is it credible?
  • Understanding the context. A key tool in manipulation is to apply information outside of its intended context. What did it say at its original source, what context was it given in, and how has it been changed?
  • Inferring from previous information. Does it fit the standard narrative you would expect? Deepfakes are increasingly an issue—so if we see, for example, our favourite TV personality speaking out on world issues, does it fit their usual profile?
  • Asking for clarity and precision. More depth can help to uncover the expertise of the source, or the origin of the content, giving you more accurate or credible information, or highlighting instances where it is less credible.
  • Remaining sceptical, but not too sceptical. We need to question what we are told, but ironically, as highlighted by the MIST study, expecting everything to be fake can make it more difficult to spot the actual fakes. Remaining open-minded will be key.

From the OODA Almanac 2024

The year 2024 will require a reorientation to new realities, largely driven by the acceleration of disruptive technologies grinding against the inertia of stale institutions that would rather we snack on the comfort food of the past than the buffet of the future. In past Almanacs we’ve talked about the rapid acceleration of technology and the power of exponentials; 2024 forward will mark the move from theoretical disruption to practical disruption. Those technologies we could not comprehend utilizing over the past five years will feel commonplace after the next five years.

Each year, the OODA Almanac is the edgiest piece we publish as we take the opportunity to not only provoke your thinking with disruptive ideas but also seek to peer out over the edge into the unknown. We hope the concepts discussed here help you reorient around what’s next, but also around what is possible.

From this year’s Almanac, two sub-themes – The Anti-content Movement and Not Just New Technologies but New Realities – are apropos to this discussion:

The Anti-content Movement 

The challenge will lie in distinguishing authentic human interaction from these high-quality, AI-generated facades, a task that will require innovative detection capabilities and a deep understanding of the nuances of human communication.

In the deluge of content generated by AI, the human element becomes a coveted rarity, a beacon of authenticity in a sea of algorithmically crafted narratives. This forecasted anti-content movement is not merely a reactionary step backward but a recalibration of value in the digital age. The essence of human creativity, the nuances of emotion, and the irreplaceable nature of personal experience will be elevated, creating a renaissance of verifiable human-centric content. Trust and affinity groups like the OODA Network will indeed rise as the arbiters of this new era, curating human expertise and interaction.

The integrity of human-generated content will become paramount, and our cybersecurity strategies must be agile enough to defend against the sophisticated manipulations that generative AI can produce, such as creating synthetic personas with high viral reach and believability.

Not Just New Technologies but New Realities

“We might engage in bar arguments over the reorientation required in the world imagined by William Gibson, but we will fight wars over the reorientation necessary to inhabit the landscape envisioned by Philip K. Dick.”

William Gibson transcended the future, whereas Philip K. Dick transcended reality. The next ten years will be more closely aligned with Dick than with Gibson. Gibson’s prescient visions of cyberpunk landscapes and the matrix have certainly shaped our understanding of a digital future. His narratives often hinge on the interplay between humanity and technology, forecasting a world where the two become inextricably linked. In contrast, Philip K. Dick’s work delves into the nature of reality itself, questioning the very fabric of existence and the human experience. His stories grapple with themes of identity, consciousness, and the nature of truth – concepts that are increasingly relevant in an era defined by deepfakes, misinformation, and the erosion of shared objective realities.

As we look to the next decade, it seems plausible that the themes explored by Dick will resonate more deeply with our societal trajectory. The rapid advancement of technology has brought us to a point where the manipulation of reality—be it through augmented reality, virtual reality, or artificial intelligence—is not just possible but becoming commonplace. The blurring lines between what is real and what is synthetic challenge our perceptions and could lead to a future that feels more akin to the surreal and often dystopian worlds depicted by Dick.

This is not to say that Gibson’s influence will diminish; on the contrary, his insights into the interconnectivity of global systems and the cybernetic enhancements of the human condition continue to unfold around us. However, the philosophical quandaries that Dick presents—such as the nature of humanity in an increasingly artificial world—may prove to be more immediately pertinent as we confront the ethical and existential implications of our technological evolution.

Reflecting on the current state of the world, it is evident that the questions raised by Dick’s work are not just philosophical musings but pressing concerns. The struggle to discern truth from fabrication, to maintain a sense of self amidst a barrage of algorithmically curated content, and to find meaning in a world where traditional narratives are constantly being upended, are challenges we grapple with daily. In this sense, Dick’s transcendence of reality may indeed be the guiding theme for the next ten years. We might engage in bar arguments over the reorientation required in the world imagined by Gibson, but we will fight wars over the reorientation necessary to inhabit the landscape envisioned by Dick.

AI-generated Content and a Shared Perception of Reality

The path forward requires a concerted effort to harness the positive potential of these technologies while mitigating their risks. It is a journey that demands vigilance, innovation, and, above all, a commitment to preserving the integrity of our shared reality.  

The advent of AI-generated content and synthetic media heralds a transformative era in the digital landscape, one that is poised to fundamentally alter our perception of reality. This shift is not merely technological but deeply philosophical, challenging the very bedrock of how we discern truth from fabrication.

AI-generated content, with its capacity to produce convincingly authentic material at unprecedented scales, introduces a new dimension to the concept of reality as we know it. The implications of this are profound, particularly when considering the potential for large-scale social media manipulation. The ability of generative AI to create content that appears genuine—whether it be text, images, or videos—means that the line between fact and fiction becomes increasingly blurred. This is not a distant future scenario but a present reality, with state and non-state actors already leveraging these capabilities to influence public opinion and political landscapes.  

Moreover, the rise of synthetic media, such as deepfakes, exacerbates this challenge. These technologies enable the creation of hyper-realistic digital falsifications, making it difficult to distinguish between genuine and manipulated content.  The potential for misuse is vast, ranging from disinformation campaigns designed to sow discord and manipulate elections, to more personal attacks aimed at individuals or organizations.

However, it’s crucial to recognize that this technological evolution also spurs innovation in detection and mitigation strategies. The development of tools and techniques to identify AI-generated content and authenticate digital media is advancing, albeit in a perpetual race against the capabilities of those seeking to exploit these technologies for malicious purposes.  
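
As an illustration of the general shape of such detection tooling, here is a minimal sketch in Python. The two signals used here (lexical diversity and a sentence-length “burstiness” proxy) are crude heuristics, and the thresholds are invented assumptions for this example; production detectors rely on model-based scores such as perplexity, and even those remain unreliable.

```python
def type_token_ratio(text):
    """Lexical diversity: unique words / total words. Machine text is
    sometimes (not always) less varied than human text."""
    words = [w.strip(".,!?;:").lower() for w in text.split()]
    return len(set(words)) / len(words) if words else 0.0

def sentence_length_variance(text):
    """'Burstiness' proxy: humans tend to mix short and long sentences;
    uniform sentence lengths yield a low variance."""
    for mark in "!?":
        text = text.replace(mark, ".")
    lengths = [len(s.split()) for s in text.split(".") if s.strip()]
    if len(lengths) < 2:
        return 0.0
    mean = sum(lengths) / len(lengths)
    return sum((n - mean) ** 2 for n in lengths) / (len(lengths) - 1)

def needs_review(text, ttr_floor=0.5, variance_floor=20.0):
    """Flag text for closer inspection when both crude signals fire.
    The floors are illustrative assumptions, not calibrated values."""
    return (type_token_ratio(text) < ttr_floor
            and sentence_length_variance(text) < variance_floor)

sample = ("The weather is nice today. The weather is nice tomorrow. "
          "The weather is nice always. The weather is nice again.")
print(f"TTR={type_token_ratio(sample):.2f}, "
      f"variance={sentence_length_variance(sample):.2f}, "
      f"flag={needs_review(sample)}")
```

The point of the sketch is the pipeline shape – score, threshold, escalate – not the specific signals, which are easily fooled in both directions.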

AI-generated content and synthetic media are reshaping our perception of reality, presenting both challenges and opportunities. As we stand at this crossroads, the path forward requires a concerted effort to harness the positive potential of these technologies while mitigating their risks. It is a journey that demands vigilance, innovation, and, above all, a commitment to preserving the integrity of our shared reality. 

Considering these developments, how do you think AI-generated content and synthetic media will shape people’s perception of reality in the future?

Human Interaction and the Resurgence of Community Are Central to a Strong Cognitive Infrastructure

By engaging in these communal exchanges, we not only challenge our own preconceptions but also build bridges across the chasms that misinformation seeks to widen.

In the labyrinth of digital discourse, where AI-generated content and misinformation proliferate with alarming velocity, the role of human interaction emerges as both a bulwark and a beacon. 

In this context, the role of human discernment and critical thinking becomes paramount. As we navigate this new terrain, fostering media literacy and a critical mindset among the populace is essential. This involves not only the ability to question and verify the authenticity of information but also an understanding of the motivations behind content creation and dissemination.  The essence of countering these challenges lies not merely in technological solutions but in reinvigorating the very fabric of human connectivity and critical engagement: 

  1. The cultivation of media literacy stands as a paramount endeavor. By empowering individuals with the skills to discern the credibility of information, evaluate sources, and understand the underlying motivations of content creators, we foster a populace less susceptible to the siren songs of misinformation. This educational imperative extends beyond formal settings, infiltrating every facet of our digital lives, urging us to question rather than consume passively. The Finnish model, where media literacy is embedded within the national curriculum, exemplifies a proactive stance against disinformation, equipping citizens from a young age with the tools to navigate the complexities of the information age.
  2. The resurgence of community and the reclamation of public discourse from the clutches of algorithms underscore the significance of human interaction. In an era where echo chambers and filter bubbles insulate us from divergent viewpoints, fostering spaces for open dialogue and debate becomes crucial. These forums, whether online or offline, should champion diversity of thought, encourage empathy, and cultivate a culture of listening and learning. By engaging in these communal exchanges, we not only challenge our own preconceptions but also build bridges across the chasms that misinformation seeks to widen.
  3. The role of human curation in mitigating the impacts of AI-generated content cannot be overstated. While algorithms play a role in content dissemination, the human touch in curating, fact-checking, and contextualizing information adds a layer of authenticity and trustworthiness that machines alone cannot replicate. This human oversight, coupled with technological advancements in detecting and flagging false information, creates a dynamic defense against the tide of disinformation (see the sketch of a human-in-the-loop review queue after this list).
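
As referenced in point 3 above, here is a minimal, hypothetical sketch in Python of how automated flagging and human curation can be combined. The class names, thresholds, and routing rules are all assumptions for illustration: clear-cut items are handled automatically, while the ambiguous middle band of scores is escalated to human reviewers, highest risk first.

```python
from dataclasses import dataclass, field
import heapq

@dataclass(order=True)
class ReviewItem:
    priority: float                  # negated score so heapq pops highest first
    content: str = field(compare=False)

class ModerationPipeline:
    def __init__(self, pass_below=0.3, remove_above=0.9):
        self.pass_below = pass_below     # below this score: publish as-is
        self.remove_above = remove_above # above this score: remove outright
        self.human_queue = []            # min-heap of ReviewItems

    def triage(self, content, score):
        """Route one item based on its automated synthetic-content score."""
        if score < self.pass_below:
            return "published"
        if score > self.remove_above:
            return "removed"
        heapq.heappush(self.human_queue, ReviewItem(-score, content))
        return "queued_for_human_review"

    def next_for_review(self):
        """Hand the highest-risk queued item to a human curator."""
        return heapq.heappop(self.human_queue).content if self.human_queue else None

pipeline = ModerationPipeline()
for text, score in [("cat photo", 0.1), ("viral claim", 0.7),
                    ("deepfake clip", 0.95), ("odd quote", 0.5)]:
    print(text, "->", pipeline.triage(text, score))
print("first human review:", pipeline.next_for_review())
```

The design choice worth noting is that the machine never makes the close calls: automation handles the clear extremes at scale, while scarce human attention is spent where the scores are most uncertain.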

In your experiences, how do you currently see human interaction playing a role in mitigating the negative effects of AI-generated content, misinformation, and social media?

NOTE:  This OODA Loop Original Analysis was partially generated with the cognitive augmentation of and in collaboration with ALTzero Project – MattGPT.

Additional OODA Loop Resources 

OODA Almanac 2024 – Reorientation: The year 2024 will require a reorientation to new realities, largely driven by the acceleration of disruptive technologies grinding against the inertia of stale institutions that would rather we snack on the comfort food of the past than the buffet of the future. In past Almanacs we’ve talked about the rapid acceleration of technology and the power of exponentials; 2024 forward will mark the move from theoretical disruption to practical disruption. Those technologies we could not comprehend utilizing over the past five years will feel commonplace after the next five years.

OODA CEO Matt Devost and OODA Network Members Review the OODA Almanac 2024 – Reorientation: Every year, we also use one of our monthly meetings for a discussion of the annual OODA Almanac with the OODA Network. This conversation took place at the February 2024 OODA Network Member Meeting, which was held on Friday, February 16, 2024.

AI Discipline Interdependence: There are concerns about uncontrolled AI growth, with many experts calling for robust AI governance. Both positive and negative impacts of AI need assessment. See: Using AI for Competitive Advantage in Business.

Benefits of Automation and New Technology: Automation, AI, robotics, and Robotic Process Automation are improving business efficiency. New sensors, especially quantum ones, are revolutionizing sectors like healthcare and national security. Advanced WiFi, cellular, and space-based communication technologies are enhancing distributed work capabilities. See: Advanced Automation and New Technologies

Emerging NLP Approaches: While Big Data remains vital, there’s a growing need for efficient small data analysis, especially with potential chip shortages. Cost reductions in training AI models offer promising prospects for business disruptions. Breakthroughs in unsupervised learning could be especially transformative. See: What Leaders Should Know About NLP

Rise of the Metaverse: The Metaverse, an immersive digital universe, is expected to reshape internet interactions, education, social networking, and entertainment. See: Future of the Metaverse.

Decision Intelligence for Optimal Choices: The simultaneous occurrence of numerous disruptions complicates situational awareness and can inhibit effective decision-making. Every enterprise should evaluate their methods of data collection, assessment, and decision-making processes. For more insights: Decision Intelligence.

Embracing Corporate Intelligence and Scenario Planning in an Uncertain Age: Apart from traditional competitive challenges, businesses also confront external threats, many of which are unpredictable. This environment amplifies the significance of Scenario Planning. It enables leaders to envision varied futures, thereby identifying potential risks and opportunities. All organizations, regardless of their size, should allocate time to refine their understanding of the current risk landscape and adapt their strategies. See: Scenario Planning

Track Technology Driven Disruption: Businesses should examine technological drivers and future customer demands. A multi-disciplinary knowledge of tech domains is essential for effective foresight. See: Disruptive and Exponential Technologies.

Tagged: Generative AI

About the Author

Daniel Pereira

Daniel Pereira is research director at OODA. He is a foresight strategist, creative technologist, and an information communication technology (ICT) and digital media researcher with 20+ years of experience directing public/private partnerships and strategic innovation initiatives.