2023 has been marked by what some are characterizing as the “double exponential” pace of artificial intelligence innovation and commercial deployment. Following are some of the most significant developments of the last few weeks, including:

  • Europe moves ahead on AI regulation, challenging tech giants’ power
  • SEC Probes Investment Advisers’ Use of AI
  • Microsoft-OpenAI Partnership Draws Scrutiny From U.K. Regulator
  • IBM, Meta launch AI alliance aimed at open source safety
  • BlackRock rolls out GenAI to staff and clients
  • Elon Musk-Backed X.AI Files With SEC to Raise Up to $1B in Equity Offering
  • Google unveils Gemini
  • OpenAI Rival Mistral Nears $2 Billion Valuation With Andreessen Horowitz Backing
  • Anthropic Cultivates Alternatives
  • Unpacking the hype around OpenAI’s rumored new Q* model

Global AI Regulation and Governance

Europe moves ahead on AI regulation, challenging tech giants’ power

“Brussels brought a new antitrust challenge against Google on the same day European lawmakers voted to approve the E.U. AI Act — lapping counterparts in the U.S., where legislation has languished”

From The Washington Post: European Union lawmakers on [December 6th] took a key step toward passing landmark restrictions on the use of artificial intelligence, putting Brussels on a collision course with American tech giants funneling billions of dollars into the burgeoning technology.

The European Parliament overwhelmingly approved the E.U. AI Act, a sweeping package that aims to protect consumers from potentially dangerous applications of artificial intelligence. Government officials made the move amid concerns that recent advances in the technology could be used to nefarious ends, ushering in surveillance, algorithmically driven discrimination and prolific misinformation that could upend democracy. E.U. officials are moving much faster than their U.S. counterparts, where discussions about AI have dragged on in Congress despite apocalyptic warnings from even some industry officials.

The legislation takes a “risk-based approach,” introducing restrictions based on how dangerous lawmakers predict an AI application could be. It would ban tools that European lawmakers deem “unacceptable,” such as systems allowing law enforcement to predict criminal behavior using analytics. It would introduce new limits on technologies deemed “high risk,” such as tools that could sway voters in elections, or the recommendation algorithms that suggest which posts, photos and videos people see on social networks.
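To make the tiered structure concrete, here is a minimal illustrative sketch in Python of how the Act’s risk-based approach sorts systems into tiers. The tier names follow the four categories widely reported in coverage of the Act (unacceptable, high, limited, minimal); the example systems and their assignments are assumptions for illustration only, not the legislation’s actual annexes.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative tiers reflecting the AI Act's risk-based approach."""
    UNACCEPTABLE = "banned outright"
    HIGH = "strict obligations before and after deployment"
    LIMITED = "transparency requirements"
    MINIMAL = "largely unregulated"

# Hypothetical assignments for illustration only -- the Act's annexes,
# not this mapping, determine how a real system is classified.
EXAMPLE_CLASSIFICATION = {
    "predictive_policing_analytics": RiskTier.UNACCEPTABLE,
    "social_media_recommender": RiskTier.HIGH,
    "voter_influence_tool": RiskTier.HIGH,
    "customer_service_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

for system, tier in EXAMPLE_CLASSIFICATION.items():
    print(f"{system}: {tier.name} -> {tier.value}")
```

The point of the tiered design is that obligations scale with predicted harm rather than applying uniformly to every AI system.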

According to The Technocrat, here are some of the major implications:

  1. Ban on emotion-recognition AI. The European Parliament’s draft text bans the use of AI that attempts to recognize people’s emotions in policing, schools, and workplaces. Makers of emotion-recognition software claim that AI is able to determine when a student is not understanding certain material, or when a driver of a car might be falling asleep. The use of AI to conduct facial detection and analysis has been criticized for inaccuracy and bias, but it has not been banned in the draft text from the other two institutions, suggesting there’s a political fight to come.

  2. Ban on real-time biometrics and predictive policing in public spaces. This will be a major legislative battle, because the various EU bodies will have to sort out whether, and how, the ban is enforced in law. Policing groups are not in favor of a ban on real-time biometric technologies, which they say are necessary for modern policing. Some countries, like France, are actually planning to increase their use of facial recognition.

  3. Ban on social scoring. Social scoring by public agencies, or the practice of using data about people’s social behavior to make generalizations and profiles, would be outlawed. That said, the outlook on social scoring, commonly associated with China and other authoritarian governments, isn’t really as simple as it may seem. The practice of using social behavior data to evaluate people is common in doling out mortgages and setting insurance rates, as well as in hiring and advertising. 

  4. New restrictions for generative AI. This draft is the first to propose ways to regulate generative AI, and would ban the use of any copyrighted material in the training sets of large language models like OpenAI’s GPT-4. OpenAI has already come under scrutiny from European lawmakers over concerns about data privacy and copyright. The draft bill also requires that AI-generated content be labeled as such. That said, the European Parliament now has to sell its policy to the European Commission and individual countries, which are likely to face lobbying pressure from the tech industry.

  5. New restrictions on recommendation algorithms on social media. The new draft assigns recommender systems to a “high risk” category, which is an escalation from the other proposed bills. This means that if it passes, recommender systems on social media platforms will be subject to much more scrutiny about how they work, and tech companies could be more liable for the impact of user-generated content.

The risks of AI, as described by Margrethe Vestager, executive vice president of the European Commission, are wide-ranging. She has emphasized concerns about the future of trust in information, vulnerability to social manipulation by bad actors, and mass surveillance. “If we end up in a situation where we believe nothing, then we have undermined our society completely,” Vestager told reporters on Wednesday.

SEC Probes Investment Advisers’ Use of AI

From the WSJ:  The Securities and Exchange Commission is asking investment advisers how they use and oversee artificial intelligence, as agency head Gary Gensler continues to express skepticism about the technology.  The SEC’s examinations division has sent requests for information on AI-related topics to several investment advisers, part of a process known as a sweep. The agency wants details on topics including AI-related marketing documents, algorithmic models used to manage client portfolios, third-party providers and compliance training, according to one such letter obtained by Vigilant Compliance, a regulatory compliance consulting firm.

The scrutiny comes as some advisers contemplate the adoption of AI tools:

  • BlackRock operates an AI research group co-headed by a former Google statistician and a Stanford University engineering professor;
  • Fidelity Investments in August noted the “incredible potential” of AI in wealth management;
  • JPMorgan Chase maintains an AI research team in New York to “advance cutting-edge research”; and
  • Goldman Sachs has said AI is poised to support investors and help them detect trends and patterns that could be impossible for humans to identify.

Microsoft-OpenAI Partnership Draws Scrutiny From U.K. Regulator

U.K. regulators said they are examining Microsoft’s partnership with OpenAI, marking a first push by one of the world’s most influential competition authorities to scrutinize the relationship between the tech giant and the artificial intelligence company behind ChatGPT. Britain’s Competition and Markets Authority said Friday that it is seeking feedback on whether the partnership—and recent developments in the governance of OpenAI—should be considered a de facto merger, in an initial step that could lead to a formal investigation. The move comes after a dramatic turn of events at OpenAI that resulted in the abrupt firing and reinstatement of Chief Executive Sam Altman and the creation of an observer role for Microsoft on OpenAI’s board of directors. The CMA said recent developments in the company’s governance played a role in the decision to probe the relationship. Microsoft is OpenAI’s largest backer and had already invested some $13 billion in OpenAI before the boardroom drama.

The U.K. authority issued an invitation on Friday for comments on the relationship between the two companies and whether their partnership should be viewed as a merger. If the watchdog determines that the relationship meets its criteria for a merger review, that could prompt a formal investigation into whether it creates competition concerns in the artificial intelligence market. Such an investigation—if it proceeds—could lead to an order for the two companies to separate or make other structural or behavioral changes. Microsoft said its partnership with OpenAI, which began in 2019, has preserved the independence of both companies and fostered AI innovation and competition.

Open Source AI and Safety

IBM, Meta launch AI alliance aimed at open source safety

As reported by CIO DIVE: The group bands together more than 50 companies and organizations to create frameworks and share information on the safe development of the technology:

  • Meta and IBM launched a collective aimed at boosting AI innovation “while ensuring scientific rigor, trust, safety, security, diversity and economic competitiveness,” the two organizations announced Tuesday. 
  • The AI Alliance is composed of more than 50 startups, organizations and universities already working on the technology. The group includes AMD, Dell Technologies, Hugging Face, Oracle and Intel. 
  • “We believe it’s better when AI is developed openly – more people can access the benefits, build innovative products and work on safety,” said Nick Clegg, president, Global Affairs at Meta, in the announcement. “We’re looking forward to working with partners to advance the state-of-the-art in AI and help everyone build responsibly.”

IBM and Meta’s latest alliance will aim to establish an open community that brings together developers and researchers to jointly address safety concerns surrounding the technology. The group plans to establish a governing board and technical oversight committee to establish overall project standards and guidelines.  Microsoft and OpenAI — which have taken a leadership position in the recent generative AI wave — are notably absent from the list, as are other big tech providers such as Google and AWS.   In response to enterprise concern over the safe deployment of AI, the field of providers has steadily added new features and capabilities surrounding AI safety and privacy. 

Big Tech and Industry Sector AI

Financial Sector:  BlackRock rolls out GenAI to staff and clients

Global investment manager BlackRock has announced a rollout of generative AI tools as soon as January in a bid to embrace the nascent technology. The world’s largest investment manager has already begun using GenAI to support its in-house risk management systems (Aladdin and eFront), a leaked internal memo reveals. BlackRock is now planning to roll the GenAI feature out to clients, who will be able to use its large language model (LLM) to extract information from Aladdin.
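The memo does not spell out how the client-facing feature will work. The general pattern it describes, an LLM answering natural-language questions over a structured data platform, is commonly built as retrieval plus summarization; the Python sketch below is a generic, hypothetical illustration of that pattern. The `call_llm` function and `portfolio_records` data are invented stand-ins, not BlackRock’s or Aladdin’s actual interfaces.

```python
import json

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM call."""
    return "[model-generated summary of the matching records]"

# Invented example data; a real system would query the platform's APIs.
portfolio_records = [
    {"asset": "ACME Corp bond", "rating": "BBB", "exposure_usd": 1_200_000},
    {"asset": "Globex equity", "rating": "A", "exposure_usd": 3_400_000},
]

def answer_question(question: str) -> str:
    # Retrieve relevant records (here, all of them), then ask the model
    # to answer strictly from that retrieved context.
    context = json.dumps(portfolio_records, indent=2)
    prompt = f"Answer using only this data:\n{context}\n\nQuestion: {question}"
    return call_llm(prompt)

print(answer_question("What is our total BBB-rated exposure?"))
```

Grounding the model’s answers in retrieved records, rather than in its training data, is the standard way such tools limit fabricated figures.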

A memo to staff on 6 December from chief operating officer Rob Goldstein, chief innovation officer Kfir Godrich, and head of engineering for Aladdin, Lance Braunstein, read: “In the last year, we’ve seen a mass proliferation of GenAI that we believe will fundamentally change how the world operates. Like the advancement of the personal computer, internet and mobile phone, the promise and potential of GenAI to transform how we work and live is enormous.” BlackRock’s news follows an increasing number of announcements of GenAI implementation in financial services. Earlier this month, Mastercard announced the launch of a GenAI retail assistant, Commerzbank confirmed the development of a GenAI-powered virtual assistant, and a UK Finance survey found that the majority of UK banks are already piloting GenAI.

Elon Musk-Backed X.AI Files With SEC to Raise Up to $1B in Equity Offering

As reported by CoinDesk: X.AI Corp., backed by Tesla CEO and X owner Elon Musk, is raising up to $1 billion in an equity securities offering, according to a regulatory filing. The company has already sold $134.7 million of the securities, with another $865.3 million remaining to be sold, according to a filing made with the U.S. Securities and Exchange Commission (SEC). The minimum investment accepted from any outside investor is $2 million.

The filing says that Musk, who took over Twitter and renamed it X, is an executive officer and director of X.AI.  Also listed as an executive is Jared Birchall, a former Goldman Sachs, Merrill Lynch and Morgan Stanley executive who is reported to be the manager of Elon Musk’s family office.  CoinDesk previously reported that in April 2023, when Musk merged Twitter into X Corp., he also registered X.AI as an artificial intelligence startup. The executive then established xAI, his own company, to “understand the universe.”  The announcement prompted some crypto users to spin up scores of “X” tokens on multiple blockchains.

Google unveils Gemini

[On December 6th], “Google…announced the rollout of Gemini, its largest and most capable large language model to date. Starting today, the company’s Bard chatbot will be powered by a version of Gemini, and will be available in English in more than 170 countries and territories. Developers and enterprise customers will get access to Gemini via API next week, with a more advanced version set to become available next year. How good is Gemini? Google says the performance of its most capable model ‘exceeds current state-of-the-art results on 30 of the 32 widely-used academic benchmarks used in LLM research and development.’ Gemini also scored 90.0% on a test known as ‘Massive Multitask Language Understanding,’ or MMLU, which assesses capabilities across 57 subjects including math, physics, history and medicine.

It is the first LLM to perform better than human experts on the test, Google said. Gemini also appears to be a very good software engineer. Last year, using an older language model, DeepMind introduced an AI system named AlphaCode that outperformed 54 percent of human coders in coding competitions. Using Gemini, Google built a next-generation version named AlphaCode 2. The sequel outperformed an estimated 85 percent of humans, the company said. Competitive coding is meaningfully different from day-to-day software engineering in some important ways: it can be both more and less difficult than what the typical engineer is asked to do. But still, the rate of progress here is striking.”
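For readers unfamiliar with MMLU: it is a multiple-choice benchmark, and the headline figure is simply accuracy over all questions. The sketch below shows how such a score is computed; `ask_model` is a hypothetical stand-in for whatever model is being evaluated, and the sample item is invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Question:
    subject: str        # MMLU spans 57 subjects (math, physics, history, ...)
    prompt: str
    choices: list[str]  # four options per question
    answer: int         # index of the correct choice

def ask_model(q: Question) -> int:
    """Hypothetical stand-in for a real model call; returns a chosen index."""
    return 0  # placeholder: a real harness would query the LLM here

def mmlu_accuracy(questions: list[Question]) -> float:
    correct = sum(1 for q in questions if ask_model(q) == q.answer)
    return correct / len(questions)

# Invented sample item, for illustration only.
sample = [Question("physics", "SI unit of force?",
                   ["newton", "joule", "watt", "ohm"], 0)]
print(f"accuracy: {mmlu_accuracy(sample):.1%}")
```

A 90.0% score therefore means the model picked the correct option on nine out of ten questions across those 57 subjects.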

OpenAI Rival Mistral Nears $2 Billion Valuation With Andreessen Horowitz Backing

Bloomberg reports:  Mistral AI is in the final stages of raising roughly €450 million ($487 million) from investors including Nvidia Corp. and Salesforce Inc. in a funding round that values the OpenAI rival at about $2 billion, according to people familiar with the deal.  The deal includes more than €325 million in equity from investors led by Andreessen Horowitz, which is in talks to invest €200 million in funding, the people said, asking not to be identified because the discussions are private. Nvidia and Salesforce agreed to contribute another €120 million in convertible debt, they said. Some of the details are in flux and may still change, the people said.

The $2 billion valuation for a company less than a year old underscores the tech world’s unbridled optimism about the future promise and profit of artificial intelligence companies. Mistral makes open-source software that powers chatbots and other generative AI tools, a field that requires considerable computing resources. It describes itself as less expensive and more efficient than US peers.  Mistral, which has emerged as one of Europe’s most prominent AI startups, was founded by former scientists from Alphabet Inc.’s DeepMind and Meta Platforms Inc. who had worked on large language models similar to those offered by Sam Altman’s OpenAI. Mistral raised a $113 million initial round in June, an enormous sum for a European tech startup.

General Catalyst, Lightspeed Venture Partners, Bpifrance and several others also participated in the round, according to the documents.

Anthropic Cultivates Alternatives

Weeks after it announced a huge partnership deal with Amazon, Anthropic doubled down on its earlier relationship with Alphabet.  Anthropic, which provides large language models, agreed to use Google’s cloud-computing infrastructure in return for a $2 billion investment, The Wall Street Journal reported. The deal follows an earlier multibillion-dollar partnership that saw Anthropic commit to training new models on Amazon Web Services.

Google invested $500 million up front and will add $1.5 billion more over an unspecified time period. The new funding builds on $300 million that Google gave to Anthropic earlier in the year for a 10 percent stake in the company. Google’s current stake in Anthropic is undisclosed. 

  • Anthropic agreed to spend $3 billion on Google Cloud over four years. Anthropic will use Google’s newly available TPU v5e AI processors to scale its Claude 2 large language model for cloud customers. However, it will continue to run most of its processing on Amazon hardware.
  • The startup will use Google’s AlloyDB database to handle accounting data and BigQuery for data analysis.
  • Google Cloud CEO Thomas Kurian said Google will draw on Anthropic’s experience in AI safety techniques such as constitutional AI, a method for training large language models to behave according to a set of social values (see the sketch below).
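As Anthropic has described it publicly, constitutional AI has a model generate a response, critique that response against a written list of principles (the “constitution”), and then revise it, with the revised outputs used for further training. The sketch below illustrates only the critique-and-revise loop; `call_llm` and the sample principles are hypothetical stand-ins, not Anthropic’s actual API or constitution.

```python
# Illustrative critique-and-revise loop in the spirit of constitutional AI.
CONSTITUTION = [
    "Choose the response that is most helpful, honest, and harmless.",
    "Avoid content that is discriminatory or encourages illegal acts.",
]

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real model call."""
    return f"[model output for: {prompt[:40]}...]"

def constitutional_respond(user_prompt: str) -> str:
    draft = call_llm(user_prompt)
    for principle in CONSTITUTION:
        # Ask the model to critique its own draft against one principle...
        critique = call_llm(
            f"Critique this response against the principle {principle!r}:\n{draft}"
        )
        # ...then to rewrite the draft so it addresses that critique.
        draft = call_llm(
            f"Revise the response to address this critique:\n{critique}\n\nResponse:\n{draft}"
        )
    return draft

print(constitutional_respond("Explain how vaccines work."))
```

In the full method, pairs of original and revised responses also feed a preference-training stage, so the principles shape the model itself rather than being applied only at query time.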

Unpacking the hype around OpenAI’s rumored new Q* model

While we still don’t know all the details, there have been reports that researchers at OpenAI had made a “breakthrough” in AI that had alarmed staff members. Reuters and The Information both report that researchers had come up with a new way to make powerful AI systems and had created a new model, called Q* (pronounced Q star), that was able to perform grade-school-level math. According to the people who spoke to Reuters, some at OpenAI believe this could be a milestone in the company’s quest to build artificial general intelligence, a much-hyped concept referring to an AI system that is smarter than humans. The company declined to comment on Q*. 

Social media is full of speculation and excessive hype, so [MIT Technology Review reporter Melissa Heikkilä] called some experts to find out how big a deal any breakthrough in math and AI would really be. See her full deep-dive report at MIT Technology Review.

Additional OODA Loop Resources

Will the “Double Exponential” Growth of Artificial Intelligence Render Global AI Governance and Safety Efforts Futile?: Major global, multinational announcements and events related to AI governance and safety took place recently. We provide a brief overview here. In an effort to get off the beaten path, however, and move away from these recent nation-state-based AI governance efforts, two recent reports frame some really interesting issues: “How Might AI Affect the Rise and Fall of Nations?” and “Governing AI at the Local Level for Global Benefit: A Response to the On-Going Calls for the Establishment of a Global AI Agency.”

Technology Convergence and Market Disruption: Rapid advancements in technology are changing market dynamics and user expectations. See: Disruptive and Exponential Technologies.

The New Tech Trinity: Artificial Intelligence, BioTech, Quantum Tech: This new Tech Trinity will make monumental shifts in the world, redefining our economy, both threatening and fortifying our national security, and revolutionizing our intelligence community. None of us are ready for this. This convergence requires a deepened commitment to foresight, preparation, and planning on a level that is not occurring anywhere. See: The New Tech Trinity.

AI Discipline Interdependence: There are concerns about uncontrolled AI growth, with many experts calling for robust AI governance. Both positive and negative impacts of AI need assessment. See: Using AI for Competitive Advantage in Business.

Benefits of Automation and New Technology: Automation, AI, robotics, and Robotic Process Automation are improving business efficiency. New sensors, especially quantum ones, are revolutionizing sectors like healthcare and national security. Advanced WiFi, cellular, and space-based communication technologies are enhancing distributed work capabilities. See: Advanced Automation and New Technologies.

Emerging NLP Approaches: While Big Data remains vital, there’s a growing need for efficient small data analysis, especially with potential chip shortages. Cost reductions in training AI models offer promising prospects for business disruptions. Breakthroughs in unsupervised learning could be especially transformative. See: What Leaders Should Know About NLP.

About the Author

Daniel Pereira

Daniel Pereira is research director at OODA. He is a foresight strategist, creative technologist, and an information communication technology (ICT) and digital media researcher with 20+ years of experience directing public/private partnerships and strategic innovation initiatives.