

The FTC, DOJ, UK, and EU Antitrust Enforcers Weigh in on Fair Market Competition Across the Global AI Ecosystem

Pulling the thread on our 2024 “stake in the ground” analysis of government regulation (or overregulation?) of AI, and our introduction of the concept of the “decontrol” of AI, the release this week of a joint statement by U.S., U.K., and E.U. regulators regarding “Competition in Generative AI Foundation Models and AI Products” pointed us back to our ongoing research questions: is this statement, and its commitment to global collaboration and enforcement of emerging AI regulatory standards, an extension of an overall climate of over-regulation of AI in the U.S. and abroad? Or is it an architecture that enhances international governmental “decontrol”?

FTC, DOJ, and International Enforcers Issue Joint Statement on AI Competition Issues

Last week, FTC Chair Lina M. Khan, alongside international antitrust enforcers and the Department of Justice, Antitrust Division, issued a statement affirming a commitment to protecting competition across the artificial intelligence (AI) ecosystem to ensure effective competition that provides fair and honest treatment for both consumers and businesses.

Jonathan Kanter, Assistant Attorney General with the U.S. Department of Justice; Sarah Cardell, Chief Executive Officer of the U.K. Competition and Markets Authority; and Margrethe Vestager, Executive Vice-President and Competition Commissioner for the European Commission, joined Chair Khan in the joint statement outlining AI competition risks, as well as principles that can help protect competition in the AI ecosystem.

The joint statement notes that while AI has the potential to become one of the most significant technological developments of the past couple of decades, it also raises competition risks that may prevent the full benefits of AI from being realized. All four antitrust enforcers pledged in the joint statement to remain vigilant for potential competition issues and expressed their determination to use available powers to safeguard against tactics that would undermine fair competition or lead to unfair or deceptive practices in the AI ecosystem.

To assess competition risks to AI, the joint statement stressed the importance of focusing on how emerging AI business models drive incentives, and ultimately behavior. Competition questions in AI will be fact-specific, but several common principles—fair dealing, interoperability, and choice—will generally help enable competition and foster innovation, as outlined in the joint statement. While potential harms may be felt across borders, the joint statement makes it clear that U.S. decision-making will always remain independent and sovereign. The FTC, along with the DOJ and CMA, also has a consumer protection mission and noted the need to continue to monitor potential harms to consumers that may stem from the use and application of AI.

A PDF version of the joint statement is available on the FTC website.

What Next?

In one of the first summaries of the statement from legal professionals, the law office of Wilson Sonsini Goodrich & Rosati weighed in with the following takeaways:

Key Takeaways

The focus on AI is not new, but the statement offers three critical takeaways.

  • The agencies continue to publicly demonstrate coordination across jurisdictions. For example, in 2021, the U.S. Department of Justice (DOJ), Federal Trade Commission, and the Directorate-General for Competition of the European Commission launched the EU-US Joint Technology Competition Policy Dialogue. The Dialogue includes high-level meetings and regular staff discussion focused on competition issues in technology markets. The Dialogue recently met in April 2024.
  • The agencies recognize the evolving nature of AI but do not specify what qualifies as generative AI or other AI markets, ecosystems, or firms. The lack of clarity creates uncertainty concerning what conduct and parties are within the scope of the statement. Companies should consider whether the competition authorities could plausibly consider the markets they operate in as AI markets and if the competition authorities could construe their conduct, inter alia, as attempting to block competitors, control key inputs, or set prices. In the United States, the DOJ has filed statements of interest in cases that allege companies have used algorithmic pricing to engage in price fixing. In Europe, different stakeholders have called for increased antitrust enforcement against alleged anticompetitive conduct in the cloud computing sector on both the EU and Member State level. On June 28, 2024, the French Competition Authority released a report on competition and generative AI and recommended that the EU consider designating companies that host generative AI models as gatekeepers under the Digital Markets Act.
  • The statement signals potential scrutiny for mergers and acquisitions, minority investments, and partnerships and agreements involving AI firms, even if the firms involved do not have any horizontal overlap or significant market share or the transaction falls short of a merger. In the UK, for example, the Competition and Markets Authority (CMA) launched a Phase I merger inquiry into Microsoft’s hiring of employees of, licensing with, and investment into Inflection AI. The CMA is also investigating Microsoft’s partnership with OpenAI and Amazon’s partnership with Anthropic, though no formal Phase I reviews have yet been launched. Further, in 2024, the FTC launched an inquiry into investments in generative AI companies to understand their impact on competition. The FTC has also launched an informal inquiry into Amazon’s agreement with AI start-up Adept to hire some of the start-up’s top executives and license some of its technology. Lastly, while the EU recently concluded that it could not review Microsoft’s investments into OpenAI under its merger rules, it is investigating the partnership to assess whether it is an agreement that restricts competition on account of possible exclusivity provisions. Also, the EU recently started an investigation into Google Gemini’s integration into Samsung handsets.

Additional OODA Loop Resources

For more OODA Loop News Briefs and Original Analysis, see:

Related References:

Is the US Government Over-Regulating Artificial Intelligence? Maybe it is time to start thinking of ways to decontrol our AI.

Decontrol AI to Accelerate Solutions: In a previous post we asked “Is the US Government Over-Regulating Artificial Intelligence?” We followed it with some historical context and a summary of today’s government controls in “Regulations on Government Use of AI.” This post builds on those two to examine an example of how a decontrol mindset can help speed AI solutions into use.

Regulations on Government Use of AI: In a previous post in this series we raised a question: Is the US Government Over-Regulating AI? That post discussed the word “decontrol” as a concept whose time has come in discussions of AI use in the government. But how can we know if we are over-regulated without an overview of the regulatory environment? Here we provide a short overview of the regulation of government IT with an eye towards shedding light on ways to accelerate AI through some decontrol.

Perspectives on AI Hallucinations: Code Libraries and Developer Ecosystems: Our hypothesis on AI hallucinations is based on a quote from the OODAcon 2024 panel “The Next Generative AI Surprise”: “Artificial Intelligence hallucinations may sometimes provide an output that is a very creative interpretation of something or an edge case that proves useful.” With that framing in mind, the following is the first installment in our survey of differing perspectives on the threats and opportunities created by AI hallucinations.

The Next Generative AI Surprise: At the OODAcon 2022 conference, we predicted that ChatGPT would take the business world by storm and included an interview with OpenAI Board Member and former Congressman Will Hurd. Today, thousands of businesses are being disrupted or displaced by generative AI. This topic was further examined at length at OODAcon 2023, taking a closer look at this innovation and its impact on business, society, and international politics. The following are insights from an OODAcon 2023 discussion between Pulkit Jaiswal, Co-Founder of NWO.ai, and Bob Flores, former CTO of the CIA.

What Can Your Organization Learn from the Use Cases of Large Language Models in Medicine and Healthcare?: It has become conventional wisdom that biotech and healthcare are the pace cars in implementing AI use cases with innovative business models and value-creation mechanisms. Other industry sectors should keep a close eye on the critical milestones and pitfalls of the biotech/healthcare space—with an eye toward what platform, product, and service innovations and architectures may have a portable value proposition within your industry. The Stanford Institute for Human-Centered AI (HAI) is doing great work fielding research in medicine and healthcare environments with quantifiable results that offer a window into AI as a general applied technology during this vast but shallow early implementation phase of “AI for the enterprise” across all industry sectors. Details here.

Two Emergent and Sophisticated Approaches to LLM Implementation in Cybersecurity: Google Security Engineering and the Carnegie Mellon University Software Engineering Institute (in collaboration with OpenAI) have sorted through the hype and done some serious thinking and formal research on developing “better approaches for evaluating LLM cybersecurity” and AI-powered patching: the future of automated vulnerability fixes. This is some great formative framing of the challenges ahead as we collectively sort out the implications of the convergence of generative AI and future cyber capabilities (offensive and defensive).

The Origins Story and the Future Now of Generative AI: This book explores generative artificial intelligence’s fast-moving impacts and exponential capabilities over just one year.

Generative AI – Socio-Technological Risks, Potential Impacts, Market Dynamics, and Cybersecurity Implications: The risks, potential positive and negative impacts, market dynamics, and security implications of generative AI emerged—slowly, then rapidly—throughout 2023, as the unprecedented hype cycle around artificial intelligence settled into a more pragmatic stoicism marked by project deployments.

In the Era of Code, Generative AI Represents National Security Risks and Opportunities for “Innovation Power”: We are entering the Era of Code. Code that writes code and code that breaks code. Code that talks to us and code that talks for us. Code that predicts and code that decides. Code that rewrites us. Organizations and individuals prioritizing understanding of how the Code Era impacts them will develop increasing advantages in the future. At OODAcon 2023, we took a closer look at generative AI innovation and its impact on business, society, and international politics. IQT and the Special Competitive Studies Project (SCSP) recently weighed in on this generative AI “spark” of innovation that will “enhance all elements of our innovation power”—and the potential cybersecurity conflagrations that that same spark may also light. Details here.

Cyber Risks

Corporate Board Accountability for Cyber Risks: With a combination of market forces, regulatory changes, and strategic shifts, corporate boards and directors are now accountable for cyber risks in their firms. See: Corporate Directors and Risk

Geopolitical-Cyber Risk Nexus: The interconnectivity brought by the Internet has caused regional issues that affect global cyberspace. Now, every significant event has cyber implications, making it imperative for leaders to recognize and act upon the symbiosis between geopolitical and cyber risks. See The Cyber Threat

Ransomware’s Rapid Evolution: Ransomware technology and its associated criminal business models have seen significant advancements. This has culminated in a heightened threat level, resembling a pandemic’s reach and impact. Yet, there are strategies available for threat mitigation. See: Ransomware, and update.

Challenges in Cyber “Net Assessment”: While leaders have long tried to gauge both cyber risk and security, actionable metrics remain elusive. Current metrics mainly determine if a system can be compromised without guaranteeing its invulnerability. It’s imperative not just to develop action plans against risks but to contextualize the state of cybersecurity concerning cyber threats. Despite its importance, achieving a reliable net assessment is increasingly challenging due to the pervasive nature of modern technology. See: Cyber Threat

Recommendations for Action

Decision Intelligence for Optimal Choices: Numerous disruptions complicate situational awareness and can inhibit effective decision-making. Every enterprise should evaluate its data collection methods, assessment, and decision-making processes for more insights: Decision Intelligence.

Proactive Mitigation of Cyber Threats: The relentless nature of cyber adversaries, whether they are criminals or nation-states, necessitates proactive measures. It’s crucial to remember that cybersecurity isn’t solely the IT department’s or the CISO’s responsibility – it’s a collective effort involving the entire leadership. Relying solely on governmental actions isn’t advised given its inconsistent approach towards aiding industries in risk reduction. See: Cyber Defenses

The Necessity of Continuous Vigilance in Cybersecurity: The consistent warnings from the FBI and CISA concerning cybersecurity signal potential large-scale threats. Cybersecurity demands 24/7 attention, even on holidays. Ensuring team endurance and preventing burnout by allocating rest periods are imperative. See: Continuous Vigilance

Embracing Corporate Intelligence and Scenario Planning in an Uncertain Age: Apart from traditional competitive challenges, businesses also confront unpredictable external threats. This environment amplifies the significance of Scenario Planning. It enables leaders to envision varied futures, thereby identifying potential risks and opportunities. Regardless of their size, all organizations should allocate time to refine their understanding of the current risk landscape and adapt their strategies. See: Scenario Planning

Tagged: AI DOJ FTC Regulation

About the Author

Daniel Pereira

Daniel Pereira is research director at OODA. He is a foresight strategist, creative technologist, and an information communication technology (ICT) and digital media researcher with 20+ years of experience directing public/private partnerships and strategic innovation initiatives.