
At the OODAcon 2022 conference, we predicted that ChatGPT would take the business world by storm and included an interview with OpenAI board member and former Congressman Will Hurd. Today, thousands of businesses are being disrupted or displaced by generative AI. OODAcon 2023 examined this topic at greater length, taking a closer look at the innovation and the impact it is having on business, society, and international politics.

Following are insights from an OODAcon 2023 discussion between Pulkit Jaiswal, Co-Founder of NWO.ai, and Bob Flores, former CTO of the CIA.

Summary of the Panel Discussion 

“Trust and transparency in Generative AI require a human-centric approach, diverse perspectives, and transparent AI models.”

Macroeconomic questions about and impacts of Generative AI:  Generative AI is disrupting multiple sectors of the economy.  It generates vast amounts of data and is becoming a dominant source of content creation, which raises a series of macroeconomic questions.  One is how generative AI affects businesses and their operations: thousands of businesses are already being disrupted or displaced by the technology, with impacts across various sectors, including Microsoft Security workstreams.  Another concerns trust and authenticity in content creation, particularly on social media platforms.  The impact of generative AI on international politics is also under examination, as is its role in the scale of cyberattacks and the future of cyber warfare.  Overall, the macroeconomic questions surrounding generative AI revolve around its impact on businesses, society, international politics, and the ethics of content creation and trust.

The Global Macroeconomic Climate and Generative AI “Irrational Exuberance”:  Inflationary pressures and deglobalization are two interconnected economic phenomena. Inflationary pressures occur when there is a general increase in prices, often driven by factors such as excessive money supply or increased production costs.  Deglobalization, by contrast, is a decline in global economic integration and interconnectedness, often accompanied by protectionist policies and reduced international trade.  Together, these trends can produce a surge in unemployment, declining trust in institutions, and challenges for central banks in managing inflation and employment.  Declining performance, corruption, and incompetence within institutions may compound these challenges, while disruptive technologies and interventions in education can play a role in addressing them. Understanding these dynamics and finding ways to mitigate their negative impacts is crucial for policymakers and economists.

It is in this climate that Generative AI is generating nosebleed valuations.  The panelists argued that a 30%-40% drawdown is inevitable, in line with previous technology hype-cycle corrections (e.g., the dotcom bust of the early 2000s or the recent cryptocurrency market debacles).

10X and 20X Leaps in Generative AI Innovation:  While the overall market valuation of Generative AI may correct on the downside, the panel argued that the pace of innovation will continue at 10x and 20x rates of change.  A surge in unemployment may be unavoidable, but displacement is not an intrinsic characteristic of the technology; new jobs will be created as well.

“While the exact role of .gov in AI transparency and mitigating risk may vary, it is clear that governmental involvement is crucial in addressing the challenges and ensuring responsible AI development.”

Case Study discussion – automation of a hedge fund?:  The reality is that quantitative funds massively underperform the S&P 500, so there is no evidence that an AI-fueled hedge fund would outperform other computational techniques for pursuing significant market gains – i.e., Generative AI as a logical extension of high-frequency trading techniques.

AI Transparency and the Risk of Generative AI:  AI transparency is needed to understand and mitigate the risks of the exponential deployment of Generative AI.  Explainable AI is seen as a solution for boosting transparency, with a significant increase in explainable AI efforts predicted for 2024.  As generative AI continues to advance, however, it is impacting not just business but also society and international politics, and the potential risks and implications of the technology require careful consideration.

The Role of Government in AI transparency and risk:  The involvement of government entities in AI transparency has the potential to boost explainable AI.  Concerns were raised about excessive .gov involvement in this area, yet the risks associated with generative AI and the need for transparency highlight the importance of .gov’s role in regulating and mitigating those risks – including concerns about the authenticity and validity of generative data.  While the exact role of .gov in AI transparency and mitigating risk may vary, it is clear that governmental involvement is crucial in addressing the challenges and ensuring responsible AI development. Areas where government involvement may be practical include:

      • AI biases introduced at the model training level;
      • Failure of models to draw on a diverse range of datasets; and
      • Model training at the labeling and annotation level (a minimal sketch of this kind of dataset check follows this list).
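
To make the dataset-diversity and labeling points above more concrete, the following is a minimal, hypothetical sketch (in Python, with invented field names and example records rather than any dataset or standard referenced by the panel) of the kind of check a team might run to surface label imbalance across data sources before model training:

```python
# A minimal, hypothetical sketch of the dataset-diversity and labeling checks
# implied above: count how training labels are distributed across data sources
# so gaps or annotation skew are visible before training. The "source" and
# "label" fields and the example records are illustrative assumptions only.

from collections import Counter


def label_distribution_by_source(records: list[dict]) -> dict[str, Counter]:
    """Count labels per data source to surface imbalance in a training set."""
    by_source: dict[str, Counter] = {}
    for rec in records:
        by_source.setdefault(rec["source"], Counter())[rec["label"]] += 1
    return by_source


if __name__ == "__main__":
    # Hypothetical annotated records drawn from three sources.
    training_records = [
        {"source": "news", "label": "positive"},
        {"source": "news", "label": "negative"},
        {"source": "social", "label": "positive"},
        {"source": "social", "label": "positive"},
        {"source": "forums", "label": "negative"},
    ]
    for source, counts in label_distribution_by_source(training_records).items():
        print(source, dict(counts))
```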

“How is local empowerment actually more effective and logical to the architectures and affordances of AI systems? What would local AI outreach look like?”

Enforcement mechanisms for a regulatory environment:  The panelists admitted they had no idea what this would look like, calling it “a huge conundrum, but the initial knee-jerk reaction is that it is the right way to go.”

Will government be able to enforce proper training of AI models? Or is it an impossible goal?:  The discussion began from the premise that the government’s ability to enforce proper training of AI is an impossible goal: concerns about government regulations and the difficulty of enforcing them after the fact contribute to a cautious “wait and see” attitude toward AI.  Implementing regulations may prove easier than enforcing them later on. The development of AI models – and the need to address biases and ensure trust and transparency – was also recognized as a challenge in the context of “enforcement” of regulations.

What role will local organizations play in the future of AI?:  A conference attendee asked this question – pushing back on the working assumption that regulation and governance will only have impact at the international and nation-state level.  How is local empowerment actually more effective and logical to the architectures and affordances of AI systems? What would local AI outreach look like?

“…there are also discussions about the potential for human augmentation of work capabilities rather than expressly a scenario of vast employment displacement.  Generative AI has sparked a mixture of excitement and apprehension about these consequences and the need for proactive measures.”

Intellectual Property (IP) under Siege by Generative AI:  Generative AI is causing significant disruption in various domains, including the business world and content creation.  The technology is generating volumes of data that surpass human capacity, and as a result generative data is becoming a dominant source of content creation.  These developments will likely introduce new dimensions to Intellectual Property (IP) rights.  Copyright was also discussed in the context of knowledge sharing.

The Spotify Case Study: What is the metric every business is optimizing for – the algorithmic tightening of profit margins, or empowering creative content creators and the creative content itself? AI’s role in optimizing music for individuals was discussed – potentially affecting artists’ earnings as AI filters or curates music based on a person’s preferences.

The “Doom and Gloom” and Displacement Narratives: The doom-and-gloom and displacement narratives about Generative AI refer to concerns about its disruptive impact on society and on employment, respectively.  Thousands of businesses are being disrupted or displaced by generative AI, driving a need for regulation to address biases and failures in models. Generative AI – the creation of content by AI systems – is becoming a dominant source of content creation, with significant implications for business, society, and international politics. However, there are also discussions about the potential for human augmentation of work capabilities rather than expressly a scenario of vast employment displacement.  Generative AI has sparked a mixture of excitement and apprehension about these consequences and the need for proactive measures.

“…staying updated with diverse datasets, labeling, and annotation processes is crucial.”

A fundamental reevaluation of business models from the bottom up based on Generative AI:  It is predicted that business models will need to be reevaluated and reengineered as AI moves from READ to WRITE mode, autonomously creating, coding, and building.  This shift may require a fundamental reevaluation of pricing models, introducing outcome-based pricing in sectors like healthcare and surgery, with fixed prices for certain surgical procedures based on an “inferred outcome” – a pricing system based on the desirability and effectiveness of the outcome.  Insurance and healthcare will undergo a complete reevaluation of the business models underpinning these sectors, with a focus on generative AI-based augmentation of healthcare diagnostics and an overall reduction in the pricing of services – i.e., AI as a “concierge” medical service, which would completely transform how we get our medical assessments in the future. Other examples discussed included a movement away from subscription-based business models toward value propositions priced on outcomes, and legal services moving away from “billable hours.”  These models are based on the predictive analytics and value creation wrought by Generative AI platforms and systems. The panel noted that the business models to do “all of this” do not yet exist. Finally, traditional degrees might become less relevant or even obsolete in the future, indicating potential changes in the fundamental business model and value proposition of higher education.
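
As a purely illustrative aid to the outcome-based pricing idea described above, here is a minimal sketch with invented numbers (the base fee, bonus, and inferred-outcome scores are assumptions, not figures from the panel) showing how a price might scale with an inferred outcome rather than being a flat fee or billable hours:

```python
# A minimal, hypothetical sketch of outcome-based pricing: the final price is a
# guaranteed base fee plus a bonus weighted by an "inferred outcome" score.
# The base fee, bonus, and example scores are invented for illustration only.

def outcome_based_price(base_fee: float, outcome_bonus: float, inferred_outcome: float) -> float:
    """Price = base fee + bonus scaled by the inferred outcome score (clamped to 0.0-1.0)."""
    inferred_outcome = max(0.0, min(1.0, inferred_outcome))
    return base_fee + outcome_bonus * inferred_outcome


if __name__ == "__main__":
    # Hypothetical procedure: $20,000 base fee, up to $10,000 tied to outcome.
    for score in (0.4, 0.7, 0.95):
        price = outcome_based_price(20_000, 10_000, score)
        print(f"inferred outcome {score:.2f} -> price ${price:,.2f}")
```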

The Uneven Distribution of Opportunity Afforded to Developed Countries versus Developing Countries: An animated discussion spread through the audience during this panel, concerned with the potential for uneven distribution of opportunity in the AI economy across the global political economy.  A vital point was made that the assumption that there will be no opportunity in the geographies usually left out of technology and innovation revolutions can be a fallacy: connectivity and code will continue to be democratized, and bottom-up, participatory, open-source opportunities for AI development will make inroads globally.  In the end, the positive prognostication was that HackThink and grassroots communities will be empowered by – and will empower – Generative AI globally, including in the Global South.

How will society keep up with these tools?:  To keep up with the exponential development of these tools, several factors must be considered. First, there is a need for proper regulation and for addressing biases that may be introduced during training.  Additionally, staying updated with diverse datasets, labeling, and annotation processes is crucial.  Building bridges with the private sector and leveraging its technology can also help, as public-private partnership plays a significant role in innovation.  Finally, understanding the cognitive infrastructure and misinformation components related to these tools – and being aware of the challenges they pose – is essential.

“AI hallucinations may sometimes provide an output that is actually a very creative interpretation of something or an edge case that proves useful.”

How do individuals keep up with these tools?:  The panel recommended that all attendees roll up their sleeves and start using ChatGPT and LLM tools – creating feedback loops and opportunities for iterative learning by simply diving into the technology.

AI hallucinations are your friend:  It was discussed that AI hallucinations are not all inherently negative, despite how they are often characterized in the current hype cycle around Generative AI.  AI hallucinations may sometimes provide an output that is actually a very creative interpretation of something or an edge case that proves useful for the ideation explored through a query.  It is case by case.

Comparative LLM approaches:  Generating a response from each large language model allows different “AI agents” to duel it out.  The content or “answer” needed can be gleaned from a comparative use of LLMs as a way of testing and interpreting the outputs of each model.
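
As a rough illustration of this comparative pattern, the sketch below sends the same prompt to several models and lays the answers side by side; the query_model() helper and the model names are hypothetical placeholders, standing in for whichever vendor SDKs or APIs a reader actually uses:

```python
# A minimal sketch of the comparative-LLM pattern discussed above. The model
# names and the query_model() helper are hypothetical placeholders; in practice
# each would wrap a specific vendor SDK or HTTP API.

def query_model(model_name: str, prompt: str) -> str:
    """Placeholder for a real API call to the named model.

    This stub simply echoes the prompt so the comparison loop below runs;
    swap it for vendor-specific client code.
    """
    return f"[{model_name}] response to: {prompt}"


def compare_models(prompt: str, models: list[str]) -> dict[str, str]:
    """Send the same prompt to each model and collect the answers side by side."""
    return {name: query_model(name, prompt) for name in models}


if __name__ == "__main__":
    prompt = "Summarize the key risks of generative AI for a board briefing."
    # Hypothetical model identifiers, used only to illustrate the pattern.
    answers = compare_models(prompt, ["model-a", "model-b", "model-c"])
    for name, answer in answers.items():
        print(f"--- {name} ---\n{answer}\n")
```

In practice, the comparison step could range from a simple side-by-side read to automated checks for agreement or contradiction between the models’ outputs.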

Trust, Transparency and Generative AI?:  Generative AI presents challenges regarding trust and transparency. The panel suggested placing humans at the center (i.e., humans in the loop) and seeking out communities.  Building trust in AI can be achieved by listening to diverse ideas and incorporating them into AI development, and transparent, explainable AI models can enhance trust.  However, there is caution about government regulations and about enforcing proper training of AI, and the issue of misplaced trust in social media platforms must also be addressed. Trust and transparency in Generative AI require a human-centric approach, diverse perspectives, and transparent AI models.
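
For readers who want a concrete picture of the “humans in the loop” idea, here is a minimal sketch under assumed details (the generate() stub, its confidence score, and the 0.8 threshold are all hypothetical) of routing low-confidence model output to a human reviewer before it is used:

```python
# A minimal, hypothetical sketch of a human-in-the-loop gate: output below a
# confidence threshold is routed to a person instead of being used directly.
# generate(), its confidence value, and the 0.8 threshold are assumptions.

def generate(prompt: str) -> tuple[str, float]:
    """Placeholder for a generative model call; returns (draft text, confidence)."""
    return f"Draft answer for: {prompt}", 0.62


def human_review(draft: str) -> str:
    """Placeholder for a human approval/editing step (e.g., an analyst review)."""
    return draft + " [human reviewed]"


def answer_with_human_in_the_loop(prompt: str, threshold: float = 0.8) -> str:
    """Return the model draft directly only when confidence clears the threshold."""
    draft, confidence = generate(prompt)
    return draft if confidence >= threshold else human_review(draft)


if __name__ == "__main__":
    print(answer_with_human_in_the_loop("Summarize this incident report."))
```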

For the program notes for this session, see The Generative AI Surprise.

The full agenda for OODAcon 2023 can be found here – Welcome to OODAcon 2023: Final Agenda and Event Details – including a full description of each session, expanded speaker bios (with links to current projects and articles about the speakers), and additional OODA Loop resources on the theme of each panel.

OODAcon 2023: Event Summary and Imperatives For Action

Download a summary of OODAcon, including useful observations to inform your strategic planning and product roadmap and to drive informed customer conversations.  This summary, based on the dialogue during and after the event, also invites your continued input on these many dynamic trends.  See: OODAcon 2023: Event Summary and Imperatives For Action.


About the Author

Daniel Pereira

Daniel Pereira is research director at OODA. He is a foresight strategist, creative technologist, and an information communication technology (ICT) and digital media researcher with 20+ years of experience directing public/private partnerships and strategic innovation initiatives.